diff --git a/content/docs/v2.2.0-alpha.2/ADOPTERS.md b/content/docs/v2.2.0-alpha.2/ADOPTERS.md
new file mode 100644
index 00000000..5b692f64
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/ADOPTERS.md
@@ -0,0 +1,105 @@
+# Antrea Adopters
+
+
+{{< img alt="glasnostic.com" src="docs/assets/adopters/glasnostic-logo.png"height="50" >}}
+
+
+{{< img alt="https://www.transwarp.io" src="docs/assets/adopters/transwarp-logo.png"height="50" >}}
+
+
+{{< img alt="https://www.terasky.com" src="docs/assets/adopters/terasky-logo.png"height="50" >}}
+
+## Success Stories
+
+Below is a list of adopters of Antrea that have publicly shared the details
+of how they use it.
+
+**[Glasnostic](https://glasnostic.com)**
+
+Glasnostic makes modern cloud operations resilient. It does this by shaping how
+systems interact, automatically and in real-time. As a result, DevOps and SRE
+teams can deploy reliably, prevent failure and assure the customer experience.
+We use Antrea's Open vSwitch support to tune how services interact in Kubernetes
+clusters. We are @glasnostic on Twitter.
+
+**[Transwarp](https://www.transwarp.io)**
+
+Transwarp is committed to building enterprise-level big data infrastructure
+software, providing enterprises with infrastructure software and support
+across the whole data lifecycle to build the data world of the future.
+
+1. We use Antrea's AntreaClusterNetworkPolicy and AntreaNetworkPolicy to protect
+big data software for every tenant of our Kubernetes platform.
+2. We use Antrea's Open vSwitch to support Pod-to-Pod networking between flannel
+and Antrea clusters, and also between Antrea clusters.
+3. We use Antrea's Open vSwitch to support Pod-to-Pod networking between flannel
+and Antrea nodes in one cluster for upgrading.
+4. We use Antrea's Egress feature to keep the original source IP, so that
+internal Pods can get the real source IP of the request.
+
+You can contact us with
+
+**[TeraSky](https://terasky.com)**
+
+TeraSky is a Global Advanced Technology Solutions Provider.
+Antrea is used in our internal Kubernetes clusters as well as by many of our customers.
+Antrea helps us to apply a very strong and flexible security model in Kubernetes.
+We make heavy use of Antrea Cluster Network Policies, Antrea Network Policies,
+and the Egress functionality.
+
+We are @TeraSkycom1 on Twitter.
+
+## Adding yourself as an Adopter
+
+It would be great to have your success story and logo on our list of
+Antrea adopters!
+
+To add yourself, you can follow the steps outlined below. Alternatively,
+feel free to reach out via Slack or on GitHub and our team will
+add your success story and logo for you.
+
+1. Prepare your addition and PR as described in the Antrea
+[Contributor Guide](CONTRIBUTING.md).
+
+2. Add your name to the success stories, using **bold** format with a link to
+your web site like this: `**[Example](https://example.com)**`
+
+3. Below your name, describe your organization or yourself and how you make
+use of Antrea. Optionally, list the features of Antrea you are using. Please
+keep the line width at 80 characters maximum, and avoid trailing spaces.
+
+4. If you are willing to share contact details, e.g. your Twitter handle, etc.
+add a line where people can find you.
+
+ Example:
+
+ ```markdown
+ **[Example](https://example.com)**
+ Example.com is a company operating internationally, focusing on creating
+ documentation examples. We are using Antrea in our K8s clusters deployed
+ using Kubeadm. We are making use of Antrea's Network Policy capabilities.
+ You can reach us on Twitter @vmwopensource.
+ ```
+
+5. (Optional) To add your logo, simply drop your logo in PNG or SVG format with
+a maximum size of 50KB to the [adopters](docs/assets/adopters) directory.
+Name the image file something that reflects your company (e.g., if your company
+is called Acme, name the image acme-logo.png). Then add an inline HTML link
+directly below the [Antrea Adopters section](#antrea-adopters). Use the
+following format:
+
+ ```html
+
+ {{< img alt="example.com" src="docs/assets/adopters/example-logo.png" height="50" >}}
+ ```
+
+6. Send a PR with your addition as described in the Antrea
+[Contributor Guide](CONTRIBUTING.md).
+
+## Adding a logo to Antrea.io
+
+We are working on adding an *Adopters* section on [antrea.io][1].
+Follow the steps above to add your organization to the list of Antrea Adopters.
+We will follow up and publish it to the [antrea.io][1] website.
+
+[1]: https://antrea.io
diff --git a/content/docs/v2.2.0-alpha.2/CHANGELOG.md b/content/docs/v2.2.0-alpha.2/CHANGELOG.md
new file mode 100644
index 00000000..f26b5561
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/CHANGELOG.md
@@ -0,0 +1 @@
+Changelogs have been moved to the [CHANGELOG](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/CHANGELOG) directory.
diff --git a/content/docs/v2.2.0-alpha.2/CODE_OF_CONDUCT.md b/content/docs/v2.2.0-alpha.2/CODE_OF_CONDUCT.md
new file mode 100644
index 00000000..94d03ef9
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/CODE_OF_CONDUCT.md
@@ -0,0 +1,3 @@
+# Community Code of Conduct
+
+Project Antrea follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
diff --git a/content/docs/v2.2.0-alpha.2/CONTRIBUTING.md b/content/docs/v2.2.0-alpha.2/CONTRIBUTING.md
new file mode 100644
index 00000000..7f4f02f4
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/CONTRIBUTING.md
@@ -0,0 +1,423 @@
+# Developer Guide
+
+Thank you for taking the time to contribute to project Antrea!
+
+This guide will walk you through the process of making your first commit and how
+to effectively get it merged upstream.
+
+
+- [Getting Started](#getting-started)
+ - [Accounts Setup](#accounts-setup)
+- [Contribute](#contribute)
+ - [Git Client Hooks](#git-client-hooks)
+ - [GitHub Workflow](#github-workflow)
+ - [Getting reviewers](#getting-reviewers)
+ - [Getting your PR verified by CI](#getting-your-pr-verified-by-ci)
+ - [Cherry-picks to release branches](#cherry-picks-to-release-branches)
+ - [Conventions for Writing Documentation](#conventions-for-writing-documentation)
+ - [Inclusive Naming](#inclusive-naming)
+ - [Building and testing your change](#building-and-testing-your-change)
+ - [Reverting a commit](#reverting-a-commit)
+ - [Sign-off Your Work](#sign-off-your-work)
+- [Issue and PR Management](#issue-and-pr-management)
+ - [Filing An Issue](#filing-an-issue)
+ - [Issue Triage](#issue-triage)
+ - [Issue and PR Kinds](#issue-and-pr-kinds)
+
+
+## Getting Started
+
+To get started, let's ensure you have completed the following prerequisites for
+contributing to project Antrea:
+
+1. Read and observe the [code of conduct](CODE_OF_CONDUCT.md).
+2. Check out the [Architecture document](docs/design/architecture.md) for the Antrea
+ architecture and design.
+3. Set up necessary [accounts](#accounts-setup).
+
+Now that you're set up, skip ahead to learn how to [contribute](#contribute).
+
+### Accounts Setup
+
+At minimum, you need the following accounts for effective participation:
+
+1. **GitHub**: Committing any change requires you to have a [GitHub
+ account](https://github.com/join).
+2. **Slack**: Join the [Kubernetes Slack](http://slack.k8s.io/) and look for our
+ [#antrea](https://kubernetes.slack.com/messages/CR2J23M0X) channel.
+3. **Google Group**: Join our [mailing list](https://groups.google.com/forum/#!forum/projectantrea-dev).
+
+## Contribute
+
+There are multiple ways in which you can contribute, either by contributing
+code in the form of new features or bug-fixes or non-code contributions like
+helping with code reviews, triaging of bugs, documentation updates, filing
+[new issues](#filing-an-issue) or writing blogs/manuals etc.
+
+To help you get your hands "dirty", there is a list of
+[starter](https://github.com/antrea-io/antrea/labels/Good%20first%20issue)
+issues from which you can choose.
+
+### Git Client Hooks
+
+There are a few recommended git client hooks which we advise you to use. You
+can find them here:
+[hack/git_client_side_hooks](hack/git_client_side_hooks).
+You can run `make install-hooks` to copy them to your local `.git/hooks/`
+folder, and remove them via `make uninstall-hooks`. For example:
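+
+```bash
+make install-hooks    # copy the recommended hooks into your local .git/hooks/
+make uninstall-hooks  # remove them again if needed
+```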
+
+### GitHub Workflow
+
+Developers work in their own forked copy of the repository and when ready,
+submit pull requests to have their changes considered and merged into the
+project's repository.
+
+1. Fork your own copy of the repository to your GitHub account by clicking on
+ `Fork` button on [Antrea's GitHub repository](https://github.com/antrea-io/antrea).
+2. Clone the forked repository on your local setup.
+
+ ```bash
+ git clone https://github.com/$user/antrea
+ ```
+
+ Add a remote named `upstream` to track the upstream Antrea repository.
+
+ ```bash
+ git remote add upstream https://github.com/antrea-io/antrea
+ ```
+
+ Never push to the upstream remote:
+
+ ```bash
+ git remote set-url --push upstream no_push
+ ```
+
+3. Create a topic branch.
+
+ ```bash
+ git checkout -b branchName
+ ```
+
+4. Make changes and commit it locally. Make sure that your commit is
+ [signed](#sign-off-your-work).
+
+ ```bash
+ git add <file>
+ git commit -s
+ ```
+
+5. Keep your branch in sync with upstream.
+
+ ```bash
+ git checkout branchName
+ git fetch upstream
+ git rebase upstream/main
+ ```
+
+6. Push local branch to your forked repository.
+
+ ```bash
+ git push -f $remoteBranchName branchName
+ ```
+
+7. Create a Pull request on GitHub.
+ Visit your fork at `https://github.com/$user/antrea` and click the
+ `Compare & Pull Request` button next to your `remoteBranchName` branch.
+
+### Getting reviewers
+
+Once you have opened a Pull Request (PR), reviewers will be assigned to your
+PR and they may provide review comments which you need to address.
+Commit changes made in response to review comments to the same branch on your
+fork. Once a PR is ready to merge, squash any *fix review feedback*, *typo*,
+and *merged* sorts of commits.
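+
+One way to squash them is an interactive rebase (a sketch; `branchName` and
+`$remoteBranchName` match the GitHub workflow above):
+
+```bash
+git rebase -i upstream/main   # mark the extra commits as "squash" or "fixup"
+git push -f $remoteBranchName branchName
+```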
+
+To make it easier for reviewers to review your PR, consider the following:
+
+1. Follow the golang [coding conventions](https://github.com/golang/go/wiki/CodeReviewComments)
+ and check out this [document](https://github.com/tnqn/code-review-comments#code-review-comments)
+ for common comments we made during reviews and suggestions for fixing them.
+2. Format your code with `make golangci-fix`; if the [linters](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/ci/README.md) flag an issue that
+ cannot be fixed automatically, an error message will be displayed so you can
+ address the issue (see the example after this list).
+3. Follow [git commit](https://chris.beams.io/posts/git-commit/) guidelines.
+4. Follow [logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md) guidelines.
+5. Please refer to [Conventions for Writing Documentation](#conventions-for-writing-documentation) for
+spelling conventions when writing documentation or commenting code.
+
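+Before opening the PR, you can also run the linters locally, as mentioned in
+item 2 above (a sketch, assuming the repository's `golangci` Make targets):
+
+```bash
+make golangci-fix  # apply automatic fixes where the linters support them
+make golangci      # run the full lint pass to catch remaining issues
+```
+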
+If your PR fixes a bug or implements a new feature, add the appropriate test
+cases to our [automated test suite](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/ci/README.md) to guarantee enough
+coverage. A PR that makes significant code changes without contributing new test
+cases will be flagged by reviewers and will not be accepted.
+
+### Getting your PR verified by CI
+
+Your PR must be verified by CI checks before it gets merged.
+CI also helps to find possible bugs before the review work starts. Once you
+create a PR, or push new commits, the CI checks at the bottom of the PR page
+will be refreshed. Checks include GitHub Actions ones and Jenkins ones. GitHub
+Actions checks are triggered automatically when you push to the head branch of
+the PR, but Jenkins checks need to be triggered manually with comments. Please
+note that if you are a first-time contributor, the GitHub workflows need
+approval from someone with write access to the repo. This is a GitHub security
+mechanism.
+
+Here are the trigger phrases for individual checks:
+
+* `/test-e2e`: Linux IPv4 e2e tests
+* `/test-conformance`: Linux IPv4 conformance tests
+* `/test-networkpolicy`: Linux IPv4 networkpolicy tests
+* `/test-all-features-conformance`: Linux IPv4 conformance tests with all features enabled
+* `/test-windows-e2e`: Windows IPv4 e2e tests
+* `/test-windows-conformance`: Windows IPv4 conformance tests
+* `/test-windows-networkpolicy`: Windows IPv4 networkpolicy tests
+* `/test-ipv6-e2e`: Linux dual stack e2e tests
+* `/test-ipv6-conformance`: Linux dual stack conformance tests
+* `/test-ipv6-networkpolicy`: Linux dual stack networkpolicy tests
+* `/test-ipv6-only-e2e`: Linux IPv6 only e2e tests
+* `/test-ipv6-only-conformance`: Linux IPv6 only conformance tests
+* `/test-ipv6-only-networkpolicy`: Linux IPv6 only networkpolicy tests
+* `/test-flexible-ipam-e2e`: Flexible IPAM e2e tests
+* `/test-multicast-e2e`: Multicast e2e tests
+* `/test-multicluster-e2e`: Multicluster e2e tests
+* `/test-vm-e2e`: ExternalNode e2e tests
+* `/test-whole-conformance`: All conformance tests on Linux
+* `/test-hw-offload`: Hardware offloading e2e tests
+* `/test-rancher-e2e`: Linux IPv4 e2e tests on Rancher clusters.
+* `/test-rancher-conformance`: Linux IPv4 conformance tests on Rancher clusters.
+* `/test-rancher-networkpolicy`: Linux IPv4 networkpolicy tests on Rancher clusters.
+* `/test-kind-e2e`: Linux IPv4 e2e tests on Kind cluster.
+* `/test-kind-ipv6-e2e`: Linux dual stack e2e tests on Kind cluster.
+* `/test-kind-ipv6-only-e2e`: Linux IPv6 only e2e tests on Kind cluster.
+* `/test-kind-conformance`: Linux IPv4 conformance tests on Kind cluster.
+* `/test-kind-ipv6-only-conformance`: Linux IPv6 only conformance tests on Kind cluster.
+* `/test-kind-ipv6-conformance`: Linux dual stack conformance tests on Kind cluster.
+* `/test-kind-networkpolicy`: Linux IPv4 networkpolicy tests on Kind cluster.
+* `/test-kind-ipv6-only-networkpolicy`: Linux IPv6 only networkpolicy tests on Kind cluster.
+* `/test-kind-ipv6-networkpolicy`: Linux dual stack networkpolicy tests on Kind cluster.
+* `/test-kind-flexible-ipam-e2e`: Flexible IPAM e2e tests on Kind clusters.
+
+Here are the trigger phrases for groups of checks:
+
+* `/test-all`: Linux IPv4 tests
+* `/test-kind-all`: Linux IPv4 tests on Kind cluster
+* `/test-windows-all`: Windows IPv4 tests, including e2e tests with proxyAll enabled. It also includes all containerd runtime based Windows tests since 1.10.0.
+* `/test-ipv6-all`: Linux dual stack tests
+* `/test-ipv6-only-all`: Linux IPv6 only tests
+* `/test-kind-ipv6-only-all`: Linux IPv6 only tests on Kind cluster.
+* `/test-kind-ipv6-all`: Linux dual stack tests on Kind cluster.
+
+In addition, you can skip a check with `/skip-*`; e.g. `/skip-e2e` skips the
+Linux IPv4 e2e tests.
+
+Skipping a check should be used only when the change doesn't influence the
+specific function. For example:
+
+* doc change: skip all checks
+* comment change: skip all checks
+* test/e2e/* change: skip conformance and networkpolicy checks
+* *_windows.go change: skip Linux checks
+
+Besides skipping specific checks, you can also cancel all stale running or
+waiting CAPV Jenkins jobs related to your PR with `/stop-all-jobs`.
+
+For more information about the tests we run as part of CI, please refer to
+[ci/README.md](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/ci/README.md).
+
+### Cherry-picks to release branches
+
+If your PR fixes a critical bug, it may need to be backported to older release
+branches which are still maintained. If this is the case, one of the Antrea
+maintainers will let you know once your PR is approved. Please refer to the
+documentation on [cherry-picks](docs/contributors/cherry-picks.md) for more
+information.
+
+### Conventions for Writing Documentation
+
+* Short name of `IP Security` should be `IPsec` as per [rfc 6071](https://datatracker.ietf.org/doc/html/rfc6071).
+* Any Kubernetes object in log/comment should start with upper case, e.g.: Namespace, Pod, Service.
+
+### Inclusive Naming
+
+For symbol names and documentation, do not introduce new usage of harmful
+language such as 'master / slave' (or 'slave' independent of 'master') and
+'blacklist / whitelist'. For more information about what constitutes harmful
+language and for a reference word replacement list, please refer to the
+[Inclusive Naming Initiative](https://inclusivenaming.org/).
+
+We are committed to removing all harmful language from the project. If you
+detect existing usage of harmful language in code or documentation, please
+report the issue to us or open a Pull Request to address it directly. Thanks!
+
+### Building and testing your change
+
+To build the Antrea Docker image together with all Antrea bits, you can simply
+do:
+
+1. Checkout your feature branch and `cd` into it.
+2. Run `make`.
+
+The second step will compile the Antrea code in a `golang` container, and build
+an Ubuntu-based Docker image that includes all the generated binaries. [`Docker`](https://docs.docker.com/install)
+must be installed on your local machine in advance. If you are a macOS user and
+cannot use [Docker Desktop](https://www.docker.com/products/docker-desktop) to
+contribute to Antrea for licensing reasons, check out this
+[document](docs/contributors/docker-desktop-alternatives.md) for possible
+alternatives.
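+
+For example, a typical build session might look like this (a sketch; the
+branch name is illustrative):
+
+```bash
+git checkout my-feature-branch
+make                          # compiles Antrea in a golang container and builds the image
+docker images | grep antrea   # the newly built image(s) should be listed
+```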
+
+Alternatively, you can build the Antrea code in your local Go environment. The
+Antrea project uses the [Go modules support](https://github.com/golang/go/wiki/Modules) which was introduced in Go 1.11. It
+facilitates dependency tracking and no longer requires projects to live inside
+the `$GOPATH`.
+
+To develop locally, you can follow these steps:
+
+ 1. [Install Go 1.21](https://golang.org/doc/install)
+ 2. Checkout your feature branch and `cd` into it.
+ 3. To build all Go files and install them under `bin`, run `make bin`
+ 4. To run all Go unit tests, run `make test-unit`
+ 5. To build the Antrea Ubuntu Docker image separately with the binaries generated in step 2, run `make ubuntu`
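+
+For example, a full local iteration with the steps above could look like this
+(a sketch):
+
+```bash
+go version      # confirm Go 1.21 is installed
+make bin        # build all Go binaries under bin/
+make test-unit  # run all Go unit tests
+make ubuntu     # build the Antrea Ubuntu Docker image from those binaries
+```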
+
+### Reverting a commit
+
+1. Create a branch in your forked repo
+
+ ```bash
+ git checkout -b revertName
+ ```
+
+2. Sync the branch with upstream
+
+ ```bash
+ git fetch upstream
+ git rebase upstream/main
+ ```
+
+3. Create a revert based on the SHA of the commit. The commit needs to be
+ [signed](#sign-off-your-work).
+
+ ```bash
+ git revert -s SHA
+ ```
+
+4. Push this new commit.
+
+ ```bash
+ git push $remoteRevertName revertName
+ ```
+
+5. Create a Pull Request on GitHub.
+ Visit your fork at `https://github.com/$user/antrea` and click the
+ `Compare & Pull Request` button next to your `remoteRevertName` branch.
+
+### Sign-off Your Work
+
+As a CNCF project, Antrea must enforce the [Developer Certificate of
+Origin](https://developercertificate.org/) (DCO) on all Pull Requests. We
+require that for all commits constituting the Pull Request, the commit message
+contains the `Signed-off-by` line with an email address that matches the commit
+author. By adding this line to their commit messages, contributors *sign-off*
+that they adhere to the requirements of the DCO.
+
+Git provides the `-s` command-line option to append the required line
+automatically to the commit message:
+
+```bash
+git commit -s -m 'This is my commit message'
+```
+
+For an existing commit, you can also use this option with `--amend`:
+
+```bash
+git commit -s --amend
+```
+
+If more than one person works on something, it's possible for more than one
+person to sign off on it. For example:
+
+```bash
+Signed-off-by: Some Developer <somedev@example.com>
+Signed-off-by: Another Developer <anotherdev@example.com>
+```
+
+We use the [DCO Github App](https://github.com/apps/dco) to enforce that all
+commits in a Pull Request include the required `Signed-off-by` line. If this is
+not the case, the app will report a failed status for the Pull Request and it
+will be blocked from being merged.
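+
+If the check fails because some commits in your branch are missing the
+`Signed-off-by` line, you can sign them all off retroactively and force-push to
+your fork (a sketch; `N` is the number of commits to fix):
+
+```bash
+git rebase --signoff HEAD~N   # rewrites the last N commits, adding Signed-off-by
+git push -f $remoteBranchName branchName
+```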
+
+Compared to our earlier CLA, DCO tends to make the experience simpler for new
+contributors. If you are contributing as an employee, there is no need for your
+employer to sign anything; the DCO assumes you are authorized to submit
+contributions (it's your responsibility to check with your employer).
+
+## Issue and PR Management
+
+We use labels and workflows (some manual, some automated with GitHub Actions) to
+help us manage triage, prioritize, and track issue progress. For a detailed
+discussion, see [docs/issue-management.md](docs/contributors/issue-management.md).
+
+### Filing An Issue
+
+Help is always appreciated. If you find something that needs fixing, please file
+an issue [here](https://github.com/antrea-io/antrea/issues). Please ensure
+that the issue is self-explanatory and has enough information for an assignee to
+get started.
+
+Before picking up a task, go through the existing
+[issues](https://github.com/antrea-io/antrea/issues) and make sure that your
+change is not already being worked on. If an issue does not already exist,
+please create a new one and discuss it with other members.
+
+For simple contributions to Antrea, please ensure that this minimum set of
+labels is included on your issue:
+
+* **kind** -- common ones are `kind/feature`, `kind/support`, `kind/bug`,
+ `kind/documentation`, or `kind/design`. For an overview of the different types
+ of issues that can be submitted, see [Issue and PR
+ Kinds](#issue-and-pr-kinds).
+ The kind of issue will determine the issue workflow.
+* **area** (optional) -- if you know the area the issue belongs in, you can assign it.
+ Otherwise, another community member will label the issue during triage. The
+ area label will identify the area of interest an issue or PR belongs in and
+ will ensure the appropriate reviewers shepherd the issue or PR through to its
+ closure. For an overview of areas, see the
+ [`docs/github-labels.md`](docs/contributors/github-labels.md).
+* **size** (optional) -- if you have an idea of the size (lines of code,
+ complexity, effort) of the issue, you can label it using a size label. The
+ size can be updated during backlog grooming by contributors. This estimate is
+ used to guide the number of features selected for a milestone.
+
+All other labels will be assigned during issue triage.
+
+### Issue Triage
+
+Once an issue has been submitted, the CI (GitHub Actions) or a human will
+review the submitted issue or PR to ensure that it has all relevant
+information. If information is lacking or there is another problem with the
+submitted issue, an appropriate `triage/*` label will be applied.
+
+After an issue has been triaged, the maintainers can prioritize the issue with
+an appropriate `priority/*` label.
+
+Once an issue has been submitted, categorized, triaged, and prioritized it
+is marked as `ready-to-work`. A ready-to-work issue should have labels
+indicating assigned areas, prioritization, and should not have any remaining
+triage labels.
+
+### Issue and PR Kinds
+
+Use a `kind` label to describe the kind of issue or PR you are submitting. Valid
+kinds include:
+
+* [`kind/api-change`](docs/contributors/issue-management.md#api-change) -- for api changes
+* [`kind/bug`](docs/contributors/issue-management.md#bug) -- for filing a bug
+* [`kind/cleanup`](docs/contributors/issue-management.md#cleanup) -- for code cleanup and organization
+* [`kind/deprecation`](docs/contributors/issue-management.md#deprecation) -- for deprecating a feature
+* [`kind/design`](docs/contributors/issue-management.md#design) -- for proposing a design or architectural change
+* [`kind/documentation`](docs/contributors/issue-management.md#documentation) -- for updating documentation
+* [`kind/failing-test`](docs/contributors/issue-management.md#failing-test) -- for reporting a failed test (may
+ be created with automation in the future)
+* [`kind/feature`](docs/contributors/issue-management.md#feature) -- for proposing a feature
+* [`kind/support`](docs/contributors/issue-management.md#support) -- to request support. You may also get support by
+ using our [Slack](https://kubernetes.slack.com/archives/CR2J23M0X) channel for
+ interactive help. If you have not set up the appropriate accounts, please
+ follow the instructions in [accounts setup](#accounts-setup).
+
+For more details on how we manage issues, please read our [Issue Management doc](docs/contributors/issue-management.md).
diff --git a/content/docs/v2.2.0-alpha.2/GOVERNANCE.md b/content/docs/v2.2.0-alpha.2/GOVERNANCE.md
new file mode 100644
index 00000000..a58ca425
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/GOVERNANCE.md
@@ -0,0 +1,85 @@
+# Antrea Governance
+
+This document defines the project governance for Antrea.
+
+## Overview
+
+**Antrea** is committed to building an open, inclusive, productive and
+self-governing open source community focused on building a high-quality
+[Kubernetes Network
+Plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). The
+community is governed by this document which defines how all members should work
+together to achieve this goal.
+
+## Code of Conduct
+
+The Antrea community abides by this [code of conduct](CODE_OF_CONDUCT.md).
+
+## Community Roles
+
+* **Users:** Members that engage with the Antrea community via any medium
+ (Slack, GitHub, mailing lists, etc.).
+* **Contributors:** Contribute regularly to the Antrea project
+ (documentation, code reviews, responding to issues, participating in proposal
+ discussions, contributing code, etc.).
+* **Maintainers**: Responsible for the overall health and direction of the
+ project. They are the final reviewers of PRs and responsible for Antrea
+ releases.
+
+### Contributors
+
+Anyone can contribute to the project (e.g. open a PR) as long as they follow the
+guidelines in [CONTRIBUTING.md](CONTRIBUTING.md).
+
+Frequent contributors to the project can become members of the antrea-io Github
+organization and receive write access to the repository. Write access is
+required to trigger re-runs of workflows in [Github
+Actions](https://docs.github.com/en/actions/managing-workflow-runs/re-running-a-workflow). Becoming
+a member of the antrea-io Github organization does not come with additional
+responsibilities for the contributor, but simplifies the contributing
+process. To become a member, you may [open an
+issue](https://github.com/antrea-io/antrea/issues/new?template=membership.md&title=REQUEST%3A%20New%20membership%20for%20%3Cyour-GH-handle%3E)
+and your membership needs to be approved by two maintainers: approval is
+indicated by leaving a `+1` comment. If a contributor is not active for a
+duration of 12 months (no contribution of any kind), they may be removed from
+the antrea-io Github organization. In case of privilege abuse (members receive
+write access to the organization), any maintainer can decide to disable write
+access temporarily for the member. Within the next 2 weeks, the maintainer must
+either restore the member's privileges, or remove the member from the
+organization. The latter requires approval from at least one other maintainer,
+which must be obtained publicly either on Github or Slack.
+
+### Maintainers
+
+The list of current maintainers can be found in
+[MAINTAINERS.md](MAINTAINERS.md).
+
+While anyone can review a PR and is encouraged to do so, only maintainers are
+allowed to merge the PR. To maintain velocity, only one maintainer's approval is
+required to merge a given PR. In case of a disagreement between maintainers, a
+vote should be called (on Github or Slack) and a simple majority is required in
+order for the PR to be merged.
+
+New maintainers must be nominated from contributors by an existing maintainer
+and must be elected by a [supermajority](#supermajority) of the current
+maintainers. Likewise, maintainers can be removed by a supermajority of the
+maintainers or can resign by notifying the maintainers.
+
+### Supermajority
+
+A supermajority is defined as two-thirds of members in the group.
+
+## Code of Conduct
+
+The code of conduct is overseen by the Antrea project maintainers. Possible code
+of conduct violations should be emailed to the project maintainers at
+.
+
+If the possible violation is against one of the project maintainers that member
+will be recused from voting on the issue. Such issues must be escalated to the
+appropriate CNCF contact, and CNCF may choose to intervene.
+
+## Updating Governance
+
+All substantive changes in Governance require a supermajority vote of the
+maintainers.
diff --git a/content/docs/v2.2.0-alpha.2/MAINTAINERS.md b/content/docs/v2.2.0-alpha.2/MAINTAINERS.md
new file mode 100644
index 00000000..d3ab761f
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/MAINTAINERS.md
@@ -0,0 +1,11 @@
+# Antrea Maintainers
+
+This is the current list of maintainers for the Antrea project. The maintainer
+role is described in [GOVERNANCE.md](GOVERNANCE.md).
+
+| Maintainer | GitHub ID | Affiliation |
+| ---------- | --------- | ----------- |
+| Antonin Bas | antoninbas | VMware |
+| Jianjun Shen | jianjuns | VMware |
+| Quan Tian | tnqn | VMware |
+| Salvatore Orlando | salv-orlando | VMware |
diff --git a/content/docs/v2.2.0-alpha.2/README.md b/content/docs/v2.2.0-alpha.2/README.md
new file mode 100644
index 00000000..9185d210
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/README.md
@@ -0,0 +1,137 @@
+# Antrea
+
+![Antrea Logo](docs/assets/logo/antrea_logo.svg)
+
+![Build Status](https://github.com/antrea-io/antrea/workflows/Go/badge.svg?branch=main)
+[![Go Report Card](https://goreportcard.com/badge/antrea.io/antrea)](https://goreportcard.com/report/antrea.io/antrea)
+[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4173/badge)](https://bestpractices.coreinfrastructure.org/projects/4173)
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+![GitHub release](https://img.shields.io/github/v/release/antrea-io/antrea?display_name=tag&sort=semver)
+[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea?ref=badge_shield)
+
+## Overview
+
+Antrea is a [Kubernetes](https://kubernetes.io) networking solution intended
+to be Kubernetes native. It operates at Layer 3/4 to provide networking and
+security services for a Kubernetes cluster, leveraging
+[Open vSwitch](https://www.openvswitch.org/) as the networking data plane.
+
+
+
+Open vSwitch is a widely adopted high-performance programmable virtual
+switch; Antrea leverages it to implement Pod networking and security features.
+For instance, Open vSwitch enables Antrea to implement Kubernetes
+Network Policies in a very efficient manner.
+
+## Prerequisites
+
+Antrea has been tested with Kubernetes clusters running version 1.19 or later.
+
+* `NodeIPAMController` must be enabled in the Kubernetes cluster.\
+ When deploying a cluster with kubeadm, the `--pod-network-cidr <CIDR>`
+ option must be specified (see the example below this list).
+ Alternatively, the NodeIPAM feature of the Antrea Controller should be
+ enabled and configured.
+* Open vSwitch kernel module must be present on every Kubernetes node.
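+
+For example, when bootstrapping a cluster with kubeadm, a Pod CIDR can be
+provided as follows (the CIDR value is only an illustration):
+
+```bash
+kubeadm init --pod-network-cidr=10.244.0.0/16
+```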
+
+## Getting Started
+
+Getting started with Antrea is very simple, and takes only a few minutes.
+See how it's done in the [Getting started](docs/getting-started.md) document.
+
+## Contributing
+
+The Antrea community welcomes new contributors. We are waiting for your PRs!
+
+* Before contributing, please get familiar with our
+[Code of Conduct](CODE_OF_CONDUCT.md).
+* Check out the Antrea [Contributor Guide](CONTRIBUTING.md) for information
+about setting up your development environment and our contribution workflow.
+* Learn about Antrea's [Architecture and Design](docs/design/architecture.md).
+Your feedback is more than welcome!
+* Check out [Open Issues](https://github.com/antrea-io/antrea/issues).
+* Join the Antrea [community](#community) and ask us any question you may have.
+
+### Community
+
+* Join the [Kubernetes Slack](http://slack.k8s.io/) and look for our
+[#antrea](https://kubernetes.slack.com/messages/CR2J23M0X) channel.
+* Check the [Antrea Team Calendar](https://calendar.google.com/calendar/embed?src=uuillgmcb1cu3rmv7r7jrhcrco%40group.calendar.google.com)
+ and join the developer and user communities!
+ + The [Antrea community meeting](https://broadcom.zoom.us/j/823654111?pwd=MEV6blNtUUtqallVSkVFSGZtQ1kwUT09),
+every two weeks on Tuesday at 5AM GMT+1 (United Kingdom time). See Antrea team calendar for localized times.
+ - [Meeting minutes](https://github.com/antrea-io/antrea/wiki/Community-Meetings)
+ - [Meeting recordings](https://www.youtube.com/playlist?list=PLuzde2hYeDBdw0BuQCYbYqxzoJYY1hfwv)
+ + [Antrea live office hours](https://antrea.io/live) archives.
+* Join our mailing lists to always stay up-to-date with Antrea development:
+ + [projectantrea-announce](https://groups.google.com/forum/#!forum/projectantrea-announce)
+for important project announcements.
+ + [projectantrea](https://groups.google.com/forum/#!forum/projectantrea)
+for updates about Antrea and to provide feedback.
+ + [projectantrea-dev](https://groups.google.com/forum/#!forum/projectantrea-dev)
+to participate in discussions on Antrea development.
+
+Also check out [@ProjectAntrea](https://twitter.com/ProjectAntrea) on Twitter!
+
+## Features
+
+* **Kubernetes-native**: Antrea follows best practices to extend the Kubernetes
+ APIs and provide familiar abstractions to users, while also leveraging
+ Kubernetes libraries in its own implementation.
+* **Powered by Open vSwitch**: Antrea relies on Open vSwitch to implement all
+ networking functions, including Kubernetes Service load-balancing, and to
+ enable hardware offloading in order to support the most demanding workloads.
+* **Run everywhere**: Run Antrea in private clouds, public clouds and on bare
+ metal, and select the appropriate traffic mode (with or without overlay) based
+ on your infrastructure and use case.
+* **Comprehensive policy model**: Antrea provides a comprehensive network policy
+ model, which builds upon Kubernetes Network Policies with new features such as
+ policy tiering, rule priorities, cluster-level policies, and Node policies.
+ Refer to the [Antrea Network Policy documentation](docs/antrea-network-policy.md)
+ for a full list of features.
+* **Windows Node support**: Thanks to the portability of Open vSwitch, Antrea
+ can use the same data plane implementation on both Linux and Windows
+ Kubernetes Nodes.
+* **Multi-cluster networking**: Federate multiple Kubernetes clusters and
+ benefit from a unified data plane (including multi-cluster Services) and a
+ unified security posture. Refer to the [Antrea Multi-cluster documentation](docs/multicluster/user-guide.md)
+ to get started.
+* **Troubleshooting and monitoring tools**: Antrea comes with CLI and UI tools
+ which provide visibility and diagnostics capabilities (packet tracing, policy
+ analysis, flow inspection). It exposes Prometheus metrics and supports
+ exporting network flow information to collectors and analyzers.
+* **Network observability and analytics**: Antrea + [Theia](https://github.com/antrea-io/theia)
+ enable fine-grained visibility into the communication among Kubernetes
+ workloads. Theia provides visualization for Antrea network flows in Grafana
+ dashboards, and recommends Network Policies to secure the workloads.
+* **Network Policies for virtual machines**: Antrea-native policies can be
+ enforced on non-Kubernetes Nodes including VMs and bare-metal servers. Project
+ [Nephe](https://github.com/antrea-io/nephe) implements security policies for
+ VMs across clouds, leveraging Antrea-native policies.
+* **Encryption**: Encryption of inter-Node Pod traffic with IPsec or WireGuard
+ tunnels.
+* **Easy deployment**: Antrea is deployed by applying a single YAML manifest
+ file.
+
+To explore more Antrea features and their usage, check the [Getting started](docs/getting-started.md#features)
+document and user guides in the [Antrea documentation folder](docs/). Refer to
+the [Changelogs](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/CHANGELOG/README.md) for a detailed list of features
+introduced for each version release.
+
+## Adopters
+
+For a list of Antrea Adopters, please refer to [ADOPTERS.md](ADOPTERS.md).
+
+## Roadmap
+
+We are adding features very quickly to Antrea. Check out the list of features we
+are considering on our [Roadmap](ROADMAP.md) page. Feel free to throw your ideas
+in!
+
+## License
+
+Antrea is licensed under the [Apache License, version 2.0](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/LICENSE).
+
+[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea.svg?type=large)](https://app.fossa.com/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea?ref=badge_large)
diff --git a/content/docs/v2.2.0-alpha.2/ROADMAP.md b/content/docs/v2.2.0-alpha.2/ROADMAP.md
new file mode 100644
index 00000000..aed9a7d4
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/ROADMAP.md
@@ -0,0 +1,119 @@
+# Antrea Roadmap
+
+This document lists the new features being considered for the future. The
+intention is for Antrea contributors and users to know what features could come
+in the near future, and to share feedback and ideas. Priorities for the project
+may change over time and so this roadmap is likely to evolve. A feature that is
+not listed now does not mean it will not be considered for Antrea. We definitely
+welcome suggestions and ideas from everyone about the roadmap and Antrea
+features. Reach us through Issues, Slack, and/or the Google Group!
+
+## Roadmap Items
+
+### Antrea v2
+
+Antrea [version 2](https://github.com/antrea-io/antrea/issues/4832) is coming in
+2024. We are graduating some popular features to Beta or GA, deprecating some
+legacy APIs, dropping support for old K8s versions (< 1.19) to improve support
+for newer ones, and more! This is a big milestone for the project, stay tuned!
+
+### Quality of life improvements for installation and upgrade
+
+We have a few things planned to improve basic usability:
+
+* provide separate container images for the Agent and Controller: this will
+ reduce image size and speed up deployment of new Antrea versions.
+* support for installation and upgrade using the antctl CLI: this will provide
+ an alternative installation method and antctl will ensure that Antrea
+ components are upgraded in the right order to minimize workload disruption.
+* CLI tools to facilitate migration from another CNI: we will take care of
+ provisioning the correct network resources for your existing workloads.
+
+### Core networking features
+
+We are working on adding BGP support to the Antrea Agent, as it has been a much
+requested feature. Take a look at [#5948](https://github.com/antrea-io/antrea/issues/5948)
+if this is something you are interested in.
+
+### Windows support improvements
+
+Antrea [supports Windows K8s Nodes](docs/windows.md). However, a few features,
+including Egress, NodePortLocal, and IPsec encryption, are not supported for
+Windows yet. We will continue to add more features for Windows (starting with Egress)
+and aim for feature parity with Linux. We encourage users to reach out if they
+would like us to prioritize a specific feature. While the installation procedure
+has improved significantly since we first added Windows support, we plan to keep
+on streamlining the procedure (more automation) and on improving the user
+documentation.
+
+### More robust FQDN support in Antrea NetworkPolicy
+
+Antrea provides a comprehensive network policy model, which builds upon K8s
+Network Policies and provides many additional capabilities. One of them is the
+ability to define policy rules using domain names (FQDNs). We think there is
+some room to improve user experience with this feature, and we are working on
+making it more stable.
+
+### Implementation of new upstream NetworkPolicy APIs
+
+[SIG Network](https://github.com/kubernetes/community/tree/master/sig-network)
+is working on [new standard APIs](https://network-policy-api.sigs.k8s.io/) to
+extend the base K8s NetworkPolicy resource. We are closely monitoring the
+upstream work and implementing these APIs as their development matures.
+
+### Better network troubleshooting with packet capture
+
+Antrea comes with many tools for network diagnostics and observability. You may
+already be familiar with Traceflow, which lets you trace a single packet through
+the Antrea network. We plan on also providing users with the ability to capture
+live traffic and export it in PCAP format. Think tcpdump, but for K8s and
+through a dedicated Antrea API!
+
+### Multi-network support for Pods
+
+We recently added the SecondaryNetwork feature, which supports provisioning
+additional networks for Pods, using the same constructs made popular by
+[Multus](https://github.com/k8snetworkplumbingwg/multus-cni). However, at the
+moment, options for network "types" are limited. We plan on supporting new use
+cases (e.g., secondary network overlays, network acceleration with DPDK), as
+well as on improving user experience for this feature (with some useful
+documentation).
+
+### L7 security policy
+
+Support for L7 NetworkPolicies was added in version 1.10, providing the ability
+to select traffic based on the application-layer context. However, the feature
+currently only supports HTTP and TLS traffic, and we plan to extend support to
+other protocols, such as DNS.
+
+### Multi-cluster networking
+
+Antrea can federate multiple K8s clusters, but this feature (introduced in
+version 1.7) is still considered Alpha today. Most of the functionality is
+already there (multi-cluster Services, cross-cluster connectivity,
+and multi-cluster NetworkPolicies), but we think there is some room for
+improvement when it comes to stability and usability.
+
+### NetworkPolicy scale and performance tests
+
+We are working on a framework to empower contributors and users to benchmark the
+performance of Antrea at scale.
+
+### Investigate better integration with service meshes
+
+As service meshes start introducing alternatives to the sidecar approach,
+we believe there is an opportunity to improve the synergy between the K8s
+network plugin and the service mesh provider. In particular, we are looking at
+how Antrea can integrate with the new Istio ambient data plane mode. Take a look
+at [#5682](https://github.com/antrea-io/antrea/issues/5682) for more
+information.
+
+### Investigate multiple replicas for the Controller
+
+While today the Antrea Controller can scale to 1000s of K8s Nodes and 100,000
+Pods, and failover to a new replica in case of failure can happen in under a
+minute, we believe we should still investigate the possibility of deploying
+multiple replicas for the Controller (Active-Active or Active-Standby), to
+enable horizontal scaling and achieve high-availability with very quick
+failover. Horizontal scaling could help reduce the memory footprint of each
+Controller instance for very large K8s clusters.
diff --git a/content/docs/v2.2.0-alpha.2/SECURITY.md b/content/docs/v2.2.0-alpha.2/SECURITY.md
new file mode 100644
index 00000000..6456edbd
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/SECURITY.md
@@ -0,0 +1,81 @@
+# Security Procedures
+
+The Antrea community holds security in the highest regard.
+The community adopted this security disclosure policy to ensure vulnerabilities are responsibly handled.
+
+## Reporting a Vulnerability
+
+If you believe you have identified a vulnerability, please work with the Antrea maintainers to fix it and disclose the issue responsibly.
+All security issues, confirmed or suspected, should be reported privately.
+Please avoid using GitHub issues, and instead report the vulnerability to .
+
+A vulnerability report should be filed if any of the following applies:
+
+* You have discovered and confirmed a vulnerability in Antrea.
+* You believe Antrea might be vulnerable to some published [CVE](https://cve.mitre.org/cve/).
+* You have found a potential security flaw in Antrea but you're not yet sure whether there's a viable attack vector.
+* You have confirmed or suspect that one of Antrea's dependencies has a vulnerability.
+
+### Vulnerability report template
+
+Provide a descriptive subject and include the following information in the body:
+
+* Detailed steps to reproduce the vulnerability (scripts, screenshots, packet captures, manual procedures, etc.).
+* Describe the effects of the vulnerability on the Kubernetes cluster, on the applications running on it, and on the underlying infrastructure, if applicable.
+* How the vulnerability affects Antrea workflows.
+* Potential attack vectors and an estimation of the attack surface, if applicable.
+* Other software that was used to expose the vulnerability.
+
+## Responding to a vulnerability
+
+A coordinator is assigned to each reported security issue. The coordinator is a member of the Antrea maintainers team, and will drive the fix and disclosure process.
+At the moment reports are received via email at .
+The first steps performed by the coordinator are to confirm the validity of the report and send an embargo reminder to all parties involved.
+Antrea maintainers and issue reporters will review the issue for confirmation of impact and determination of affected components.
+
+With reference to the scale reported below, reported vulnerabilities will be disclosed and treated as regular issues if their issue risk is low (level 4 or higher in the scale).
+For these lower-risk issues the fix process will proceed with the usual GitHub workflow.
+
+### Reference taxonomy for issue risk
+
+1. Vulnerability must be fixed in main and any other supported branch.
+2. Vulnerability must be fixed in main only for next release.
+3. Vulnerability in experimental features or troubleshooting code.
+4. Vulnerability without practical attack vector (e.g.: needs GUID guessing).
+5. Not a vulnerability per se, but an opportunity to strengthen security (in code, architecture, protocols, and/or processes).
+6. Not a vulnerability or a strengthening opportunity.
+7. Vulnerability only exists in some PR or non-release branch.
+
+## Developing a patch for a vulnerability
+
+This part of the process applies only to confirmed vulnerabilities.
+The reporter and Antrea maintainers, plus anyone they deem necessary to develop and validate a fix, will be included in the discussion.
+
+**Please refrain from creating a PR for the fix!**
+
+A fix is proposed as a patch to the current main branch, formatted with:
+
+```bash
+git format-patch --stdout HEAD~1 > path/to/local/file.patch
+```
+
+and then sent to .
+
+**Please don't push the patch to the Antrea fork on your github account!**
+
+Patch review will be performed via email. Reviewers will suggest modifications and/or improvements, and then pre-approve it for merging.
+Pre-approval will ensure patches can be fast-tracked through public code review later at disclosure time.
+
+## Disclosing the vulnerability
+
+In preparation for this, at least one maintainer must be available to help push the fix at disclosure time.
+
+At disclosure time, one of the maintainers (or the reporter) will open an issue on GitHub and create a PR with the patch for the main branch and any other applicable branches.
+Available maintainers will fast-track approvals and merge the patch.
+
+Regardless of the owner of the issue and the corresponding PR, the original reporter and the submitter of the fix will be properly credited.
+As for the git history, the commit message and author of the pre-approved patch will be preserved in the final patch submitted into the Antrea repository.
+
+### Notes
+
+At the moment the Antrea project does not have a process to assign a CVE to a confirmed vulnerability.
diff --git a/content/docs/v2.2.0-alpha.2/_index.md b/content/docs/v2.2.0-alpha.2/_index.md
new file mode 100644
index 00000000..268d5b61
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/_index.md
@@ -0,0 +1,7 @@
+---
+cascade:
+ layout: docs
+ version: v2.2.0-alpha.2
+---
+
+{{% include-md "README.md" %}}
diff --git a/content/docs/v2.2.0-alpha.2/docs/admin-network-policy.md b/content/docs/v2.2.0-alpha.2/docs/admin-network-policy.md
new file mode 100644
index 00000000..4fac588f
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/admin-network-policy.md
@@ -0,0 +1,119 @@
+# AdminNetworkPolicy API Support in Antrea
+
+## Table of Contents
+
+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Usage](#usage)
+ - [Sample specs for AdminNetworkPolicy and BaselineAdminNetworkPolicy](#sample-specs-for-adminnetworkpolicy-and-baselineadminnetworkpolicy)
+ - [Relationship with Antrea-native Policies](#relationship-with-antrea-native-policies)
+
+
+## Introduction
+
+Kubernetes provides the NetworkPolicy API as a simple way for developers to control traffic flows of their applications.
+While NetworkPolicy is embraced throughout the community, it was designed for developers instead of cluster admins.
+Therefore, traits such as the lack of explicit deny rules make securing workloads at the cluster level difficult.
+The Network Policy API working group (a subproject of Kubernetes SIG-Network) has therefore introduced the
+[AdminNetworkPolicy APIs](https://network-policy-api.sigs.k8s.io/api-overview/), which aim to solve the cluster admin
+policy use cases.
+
+Starting with v1.13, Antrea supports the `AdminNetworkPolicy` and `BaselineAdminNetworkPolicy` API types, except for
+advanced Namespace selection mechanisms (namely `sameLabels` and `notSameLabels` rules) which are still in the
+experimental phase and not required as part of conformance.
+
+## Prerequisites
+
+AdminNetworkPolicy was introduced in v1.13 as an alpha feature and is disabled by default. A feature gate,
+`AdminNetworkPolicy`, must be enabled in antrea-controller.conf in the `antrea-config` ConfigMap when Antrea is deployed:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ AdminNetworkPolicy: true
+```
+
+Note that the `AdminNetworkPolicy` feature also requires the `AntreaPolicy` featureGate to be set to true, which is
+enabled by default since Antrea v1.0.
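+
+If you deploy Antrea with Helm rather than the YAML manifest, the same gate can
+be enabled through the chart values (a sketch, assuming the chart's
+`featureGates` value map):
+
+```bash
+helm install -n kube-system antrea antrea/antrea \
+  --set featureGates.AdminNetworkPolicy=true
+```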
+
+In addition, the AdminNetworkPolicy CRD types need to be installed in the K8s cluster.
+Refer to [this document](https://network-policy-api.sigs.k8s.io/getting-started/) for more information.
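+
+Once installed, you can verify that the CRDs are available in the cluster (a
+sketch; the resource names follow from the `policy.networking.k8s.io` API
+group):
+
+```bash
+kubectl get crd adminnetworkpolicies.policy.networking.k8s.io
+kubectl get crd baselineadminnetworkpolicies.policy.networking.k8s.io
+```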
+
+## Usage
+
+### Sample specs for AdminNetworkPolicy and BaselineAdminNetworkPolicy
+
+Please refer to the [examples page](https://network-policy-api.sigs.k8s.io/reference/examples/) of the network-policy-api
+repo, which contains several user stories for the AdminNetworkPolicy APIs, as well as sample specs for each user
+story. Shown below are sample specs of `AdminNetworkPolicy` and `BaselineAdminNetworkPolicy` for demonstration purposes:
+
+```yaml
+apiVersion: policy.networking.k8s.io/v1alpha1
+kind: AdminNetworkPolicy
+metadata:
+ name: cluster-wide-deny-example
+spec:
+ priority: 10
+ subject:
+ namespaces:
+ matchLabels:
+ kubernetes.io/metadata.name: sensitive-ns
+ ingress:
+ - action: Deny
+ from:
+ - namespaces:
+ namespaceSelector: {}
+ name: select-all-deny-all
+```
+
+```yaml
+apiVersion: policy.networking.k8s.io/v1alpha1
+kind: BaselineAdminNetworkPolicy
+metadata:
+ name: default
+spec:
+ subject:
+ namespaces: {}
+ ingress:
+ - action: Deny # zero-trust cluster default security posture
+ from:
+ - namespaces:
+ namespaceSelector: {}
+```
+
+Note that for a single cluster, the `BaselineAdminNetworkPolicy` resource is supported as a singleton, which must be
+named `default`.
+
+### Relationship with Antrea-native Policies
+
+AdminNetworkPolicy API objects and Antrea-native policies can co-exist with each other in the same cluster.
+
+AdminNetworkPolicy and BaselineAdminNetworkPolicy API types provide K8s upstream supported, cluster admin facing
+guardrails that are portable and CNI-agnostic. AntreaClusterNetworkPolicy and AntreaNetworkPolicy on the other hand,
+are designed for similar use cases but provide a richer feature set, including FQDN policies, nodeSelectors and L7 rules.
+See the [Antrea-native policy doc](antrea-network-policy.md) and [L7 policy doc](antrea-l7-network-policy.md) for details.
+
+Both the AdminNetworkPolicy object and Antrea-native policy objects use a `priority` field to determine their precedence
+relative to other policy objects. The following diagram describes the relative precedence between the AdminNetworkPolicy
+API types and Antrea-native policy types:
+
+```text
+Antrea-native Policies (tier != baseline) >
+AdminNetworkPolicies >
+K8s NetworkPolicies >
+Antrea-native Policies (tier == baseline) >
+BaselineAdminNetworkPolicy
+```
+
+In other words, any Antrea-native policies that are not created in the `baseline` tier will have higher precedence over,
+and thus evaluated before, all AdminNetworkPolicies at any `priority`. Effectively, the AdminNetworkPolicy objects are
+associated with a tier priority lower than Antrea-native policies, but higher than K8s NetworkPolicies. Similarly,
+baseline-tier Antrea-native policies will have a higher precedence over the BaselineAdminNetworkPolicy object.
+For more information on policy and rule precedence, refer to [this section](antrea-network-policy.md#notes-and-constraints).
diff --git a/content/docs/v2.2.0-alpha.2/docs/aks-installation.md b/content/docs/v2.2.0-alpha.2/docs/aks-installation.md
new file mode 100644
index 00000000..336ca826
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/aks-installation.md
@@ -0,0 +1,283 @@
+# Deploying Antrea on AKS and AKS Engine
+
+This document describes steps to deploy Antrea to an AKS cluster or an AKS
+Engine cluster.
+
+## Deploy Antrea to an AKS cluster
+
+Antrea can be deployed to an AKS cluster either in `networkPolicyOnly` mode or
+in `encap` mode.
+
+In `networkPolicyOnly` mode, Antrea enforces NetworkPolicies and implements
+other services for the AKS cluster, while the Azure CNI takes care of Pod IPAM
+and traffic routing across Nodes. For more information about `networkPolicyOnly`
+mode, refer to [this design document](design/policy-only.md).
+
+In `encap` mode, Antrea is in charge of Pod IPAM and of all the networking
+functions on the Nodes. Using `encap` mode provides access to additional Antrea
+features, such as Multicast, as inter-Node Pod traffic is encapsulated, and is
+not handled directly by the Azure Virtual Network. Note that the [caveats](eks-installation.md#deploying-antrea-in-encap-mode)
+which apply when deploying Antrea in `encap` mode on EKS do *not* apply for AKS.
+
+We recommend `encap` mode, as it will give you access to the most Antrea
+features.
+
+### AKS Prerequisites
+
+Install the Azure Cloud CLI. Refer to [Azure CLI installation guide](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
+
+We recommend using the latest version available (use at least version 2.39.0).
+
+### Deploying Antrea in `networkPolicyOnly` mode
+
+#### Creating the cluster
+
+You can use any method to create an AKS cluster. The example given here uses the Azure Cloud CLI.
+
+1. Create an AKS cluster
+
+ ```bash
+ export RESOURCE_GROUP_NAME=aks-antrea-cluster
+ export CLUSTER_NAME=aks-antrea-cluster
+ export LOCATION=westus
+
+ az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
+ az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --node-count 2 \
+ --network-plugin azure
+ ```
+
+ **Note:** Do not specify the `--network-policy` option.
+
+2. Get AKS cluster credentials
+
+ ```bash
+ az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME
+ ```
+
+3. Access your cluster
+
+ ```bash
+ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-84330359-vmss000000 Ready agent 6m21s v1.16.10
+ aks-nodepool1-84330359-vmss000001 Ready agent 6m25s v1.16.10
+ ```
+
+#### Deploying Antrea
+
+1. Prepare the cluster Nodes
+
+ Deploy the `antrea-node-init` DaemonSet to enable Azure CNI to operate in transparent mode.
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-aks-node-init.yml
+ ```
+
+2. Deploy Antrea
+
+ To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases).
+Note that AKS support was added in release 0.9.0, which means you cannot
+pick a release older than 0.9.0. For any given release `<TAG>` (e.g. `v0.9.0`),
+you can deploy Antrea as follows:
+
+ ```bash
+ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-aks.yml
+ ```
+
+ To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea-aks.yml):
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-aks.yml
+ ```
+
+ The command will deploy a single replica of Antrea controller to the AKS
+cluster and deploy Antrea agent to every Node. After a successful deployment
+you should be able to see these Pods running in your cluster:
+
+ ```bash
+ $ kubectl get pods --namespace kube-system -l app=antrea
+ NAME READY STATUS RESTARTS AGE
+ antrea-agent-bpj72 2/2 Running 0 40s
+ antrea-agent-j2sjz 2/2 Running 0 40s
+ antrea-controller-6f7468cbff-5sk4t 1/1 Running 0 43s
+ antrea-node-init-6twqg 1/1 Running 0 2m
+ antrea-node-init-mqsqr 1/1 Running 0 2m
+ ```
+
+3. Restart remaining Pods
+
+    Once Antrea is up and running, restart all Pods in all Namespaces (kube-system, etc.) so they can be managed by Antrea.
+
+ ```bash
+    kubectl delete pods -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{ print $1 }')
+ pod "coredns-544d979687-96xm9" deleted
+ pod "coredns-544d979687-p7dfb" deleted
+ pod "coredns-autoscaler-78959b4578-849k8" deleted
+ pod "dashboard-metrics-scraper-5f44bbb8b5-5qkkx" deleted
+ pod "kube-proxy-6qxdw" deleted
+ pod "kube-proxy-h6d89" deleted
+ pod "kubernetes-dashboard-785654f667-7twsm" deleted
+ pod "metrics-server-85c57978c6-pwzcx" deleted
+ pod "tunnelfront-649ff5fb55-5lxg7" deleted
+ ```
+
+### Deploying Antrea in `encap` mode
+
+AKS now officially supports [Bring your own Container Network Interface (BYOCNI)](https://learn.microsoft.com/en-us/azure/aks/use-byo-cni).
+Thanks to this, you can deploy Antrea on AKS in `encap` mode, and you will not
+lose access to any functionality. Check the AKS BYOCNI documentation for
+prerequisites, in particular for AKS version requirements.
+
+#### Creating the cluster
+
+You can use any method to create an AKS cluster. The example given here uses the Azure CLI.
+
+1. Create an AKS cluster
+
+ ```bash
+ export RESOURCE_GROUP_NAME=aks-antrea-cluster
+ export CLUSTER_NAME=aks-antrea-cluster
+ export LOCATION=westus
+
+ az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
+ az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --node-count 2 \
+ --network-plugin none
+ ```
+
+ Notice `--network-plugin none`, which tells AKS not to install any CNI plugin.
+
+2. Get AKS cluster credentials
+
+ ```bash
+ az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME
+ ```
+
+3. Access your cluster
+
+ ```bash
+ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-40948307-vmss000000 NotReady agent 18m v1.27.7
+ aks-nodepool1-40948307-vmss000001 NotReady agent 17m v1.27.7
+ ```
+
+    The Nodes are expected to report a `NotReady` Status, since no CNI plugin is
+    installed yet.
+
+#### Deploying Antrea
+
+You can use Helm to easily install Antrea (or any other supported installation
+method). Just make sure that you configure Antrea NodeIPAM:
+
+```bash
+# add the Antrea Helm chart repository (skip if you have already added it)
+helm repo add antrea https://charts.antrea.io
+helm repo update
+
+cat <<EOF >> values-aks.yml
+nodeIPAM:
+ enable: true
+ clusterCIDRs: ["10.10.0.0/16"]
+EOF
+
+helm install -n kube-system -f values-aks.yml antrea antrea/antrea
+```
+
+For more information about how to configure Antrea Node IPAM, please refer to
+[Antrea Node IPAM guide](antrea-ipam.md#running-nodeipam-within-antrea-controller).
+
+After a while, make sure that all your Nodes report a `Ready` Status and that
+all your Pods are running correctly. Some Pods, and in particular the
+`metrics-server` Pods, may restart once after installing Antrea; this is not an
+issue.
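+
+You can verify this with `kubectl`:
+
+```bash
+# all Nodes should eventually report a Ready Status
+kubectl get nodes
+# all Pods across Namespaces should be Running
+kubectl get pods --all-namespaces
+```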
+
+After a successful installation, Pods should look like this:
+
+```bash
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system antrea-agent-bpskv 2/2 Running 0 7m34s
+kube-system antrea-agent-pfqrn 2/2 Running 0 7m34s
+kube-system antrea-controller-555b8c799d-wk8zz 1/1 Running 0 7m34s
+kube-system cloud-node-manager-2nszz 1/1 Running 0 31m
+kube-system cloud-node-manager-wj68q 1/1 Running 0 31m
+kube-system coredns-789789675-2nwd7 1/1 Running 0 6m48s
+kube-system coredns-789789675-lbkfn 1/1 Running 0 31m
+kube-system coredns-autoscaler-649b947bbd-j5wqc 1/1 Running 0 31m
+kube-system csi-azuredisk-node-4bnnl 3/3 Running 0 31m
+kube-system csi-azuredisk-node-52nwd 3/3 Running 0 31m
+kube-system csi-azurefile-node-2h66l 3/3 Running 0 31m
+kube-system csi-azurefile-node-dhrf2 3/3 Running 0 31m
+kube-system konnectivity-agent-5fc7989878-6nhwl 1/1 Running 0 31m
+kube-system konnectivity-agent-5fc7989878-t2n6h 1/1 Running 0 30m
+kube-system kube-proxy-96c9p 1/1 Running 0 31m
+kube-system kube-proxy-x8g8s 1/1 Running 0 31m
+kube-system metrics-server-5955767688-2hjvn 2/2 Running 0 3m45s
+kube-system metrics-server-5955767688-vmcq7 2/2 Running 0 3m45s
+```
+
+## Deploy Antrea to an AKS Engine cluster
+
+Antrea is an integrated CNI of AKS Engine, and can be installed in
+`networkPolicyOnly` mode or `encap` mode to an AKS Engine cluster as part of the
+AKS Engine cluster deployment. To learn basics of AKS Engine cluster deployment,
+please refer to [AKS Engine Quickstart Guide](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/quickstart.md).
+
+### Deploying Antrea in `networkPolicyOnly` mode
+
+To configure Antrea to enforce NetworkPolicies for the AKS Engine cluster,
+`"networkPolicy": "antrea"` needs to be set in `kubernetesConfig` of the AKS
+Engine cluster definition (Azure CNI will be used as the `networkPlugin`):
+
+```json
+ "apiVersion": "vlabs",
+ "properties": {
+ "orchestratorProfile": {
+ "kubernetesConfig": {
+ "networkPolicy": "antrea"
+ }
+ }
+ }
+```
+
+You can use the deployment template
+[`examples/networkpolicy/kubernetes-antrea.json`](https://github.com/Azure/aks-engine/blob/master/examples/networkpolicy/kubernetes-antrea.json)
+to deploy an AKS Engine cluster with Antrea in `networkPolicyOnly` mode:
+
+```bash
+$ aks-engine deploy --dns-prefix <dns-prefix> \
+    --resource-group <resource-group> \
+ --location westus2 \
+ --api-model examples/networkpolicy/kubernetes-antrea.json \
+ --auto-suffix
+```
+
+### Deploying Antrea in `encap` mode
+
+To deploy Antrea in `encap` mode for an AKS Engine cluster, both
+`"networkPlugin": "antrea"` and `"networkPolicy": "antrea"` need to be set in
+`kubernetesConfig` of the AKS Engine cluster definition:
+
+```json
+ "apiVersion": "vlabs",
+ "properties": {
+ "orchestratorProfile": {
+ "kubernetesConfig": {
+ "networkPlugin": "antrea",
+ "networkPolicy": "antrea"
+ }
+ }
+ }
+```
+
+You can add `"networkPlugin": "antrea"` to the deployment template
+[`examples/networkpolicy/kubernetes-antrea.json`](https://github.com/Azure/aks-engine/blob/master/examples/networkpolicy/kubernetes-antrea.json),
+and use the template to deploy an AKS Engine cluster with Antrea in `encap`
+mode.
diff --git a/content/docs/v2.2.0-alpha.2/docs/antctl.md b/content/docs/v2.2.0-alpha.2/docs/antctl.md
new file mode 100644
index 00000000..62073d87
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/antctl.md
@@ -0,0 +1,900 @@
+# Antctl
+
+antctl is the command-line tool for Antrea. At the moment, antctl supports
+running in three different modes:
+
+* "controller mode": when run out-of-cluster or from within the Antrea
+ Controller Pod, antctl can connect to the Antrea Controller and query
+ information from it (e.g. the set of computed NetworkPolicies).
+* "agent mode": when run from within an Antrea Agent Pod, antctl can connect to
+ the Antrea Agent and query information local to that Agent (e.g. the set of
+ computed NetworkPolicies received by that Agent from the Antrea Controller, as
+ opposed to the entire set of computed policies).
+* "flowaggregator mode": when run from within a Flow Aggregator Pod, antctl can
+ connect to the Flow Aggregator and query information from it (e.g. flow records
+ related statistics).
+
+## Table of Contents
+
+
+- [Installation](#installation)
+- [Usage](#usage)
+ - [Showing or changing log verbosity level](#showing-or-changing-log-verbosity-level)
+ - [Showing feature gates status](#showing-feature-gates-status)
+ - [Performing checks to facilitate installation process](#performing-checks-to-facilitate-installation-process)
+ - [Pre-installation checks](#pre-installation-checks)
+ - [Post-installation checks](#post-installation-checks)
+ - [Collecting support information](#collecting-support-information)
+ - [controllerinfo and agentinfo commands](#controllerinfo-and-agentinfo-commands)
+ - [NetworkPolicy commands](#networkpolicy-commands)
+ - [Mapping endpoints to NetworkPolicies](#mapping-endpoints-to-networkpolicies)
+ - [Evaluating expected NetworkPolicy behavior](#evaluating-expected-networkpolicy-behavior)
+ - [Dumping Pod network interface information](#dumping-pod-network-interface-information)
+ - [Dumping OVS flows](#dumping-ovs-flows)
+ - [OVS packet tracing](#ovs-packet-tracing)
+ - [Traceflow](#traceflow)
+ - [Antctl Proxy](#antctl-proxy)
+ - [Flow Aggregator commands](#flow-aggregator-commands)
+ - [Dumping flow records](#dumping-flow-records)
+ - [Record metrics](#record-metrics)
+ - [Multi-cluster commands](#multi-cluster-commands)
+ - [Multicast commands](#multicast-commands)
+ - [Showing memberlist state](#showing-memberlist-state)
+ - [BGP commands](#bgp-commands)
+ - [Upgrade existing objects of CRDs](#upgrade-existing-objects-of-crds)
+
+
+## Installation
+
+The antctl binary is included in the Antrea Docker images
+(`antrea/antrea-agent-ubuntu`, `antrea/antrea-controller-ubuntu`) which means
+that there is no need to install anything to connect to the Antrea Agent. Simply
+exec into the antrea-agent container for the appropriate antrea-agent Pod and
+run `antctl`:
+
+```bash
+kubectl exec -it ANTREA-AGENT_POD_NAME -n kube-system -c antrea-agent -- bash
+> antctl help
+```
+
+Starting with Antrea release v0.5.0, we publish the antctl binaries for
+different OS / CPU Architecture combinations. Head to the [releases
+page](https://github.com/antrea-io/antrea/releases) and download the
+appropriate one for your machine. For example:
+
+On Mac & Linux:
+
+```bash
+curl -Lo ./antctl "https://github.com/antrea-io/antrea/releases/download/<TAG>/antctl-$(uname)-x86_64"
+chmod +x ./antctl
+mv ./antctl /some-dir-in-your-PATH/antctl
+antctl version
+```
+
+For Linux, we also publish binaries for Arm-based systems.
+
+On Windows, using PowerShell:
+
+```powershell
+Invoke-WebRequest -Uri https://github.com/antrea-io/antrea/releases/download/<TAG>/antctl-windows-x86_64.exe -Outfile antctl.exe
+Move-Item .\antctl.exe c:\some-dir-in-your-PATH\antctl.exe
+antctl version
+```
+
+## Usage
+
+To see the list of available commands and options, run `antctl help`. The list
+will be different based on whether you are connecting to the Antrea Controller
+or Agent.
+
+When running out-of-cluster ("controller mode" only), antctl will look for your
+kubeconfig file at `$HOME/.kube/config` by default. You can select a different
+one by setting the `KUBECONFIG` environment variable or with `--kubeconfig`
+(the latter taking precedence over the former).
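+
+For example (the kubeconfig paths below are hypothetical):
+
+```bash
+# use an alternate kubeconfig via the environment variable
+export KUBECONFIG=$HOME/clusters/dev.kubeconfig
+antctl get controllerinfo
+# the --kubeconfig flag takes precedence over KUBECONFIG
+antctl get controllerinfo --kubeconfig $HOME/clusters/prod.kubeconfig
+```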
+
+The following sub-sections introduce a few commands which are useful for
+troubleshooting the Antrea system.
+
+### Showing or changing log verbosity level
+
+Starting from version 0.10.0, Antrea supports showing or changing the log
+verbosity level of Antrea Controller or Antrea Agent using the `antctl log-level`
+command. Starting from version 1.5, Antrea supports showing or changing the
+log verbosity level of the Flow Aggregator using the `antctl log-level` command.
+The command can only run locally inside the `antrea-controller`, `antrea-agent`
+or `flow-aggregator` container.
+
+The following command prints the current log verbosity level:
+
+```bash
+antctl log-level
+```
+
+This command updates the log verbosity level (the `LEVEL` argument must be an
+integer):
+
+```bash
+antctl log-level LEVEL
+```
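+
+For example, a minimal sketch (assuming the default verbosity level of 0):
+
+```bash
+# raise verbosity to 4 while debugging, then restore the default
+antctl log-level 4
+antctl log-level 0
+```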
+
+### Showing feature gates status
+
+The feature gates of Antrea Controller and Agent can be shown using the `antctl get featuregates` command.
+The command can run locally inside the `antrea-controller` or `antrea-agent`
+container, or out-of-cluster. When running out-of-cluster or inside the
+Controller Pod, it will print the feature gates of both the Controller and the Agent.
+
+The following command prints the current feature gates:
+
+```bash
+antctl get featuregates
+```
+
+### Performing checks to facilitate installation process
+
+Antrea provides a utility command `antctl check` designed to perform checks
+that verify whether a Kubernetes cluster is correctly configured for installing
+Antrea, and also to confirm that Antrea has been installed correctly.
+
+#### Pre-installation checks
+
+Before installing Antrea, it can be helpful to ensure that the Kubernetes
+cluster is configured properly. This can prevent potential issues that might
+arise during the installation of Antrea. To perform these pre-installation
+checks, simply run the command as follows:
+
+```bash
+antctl check cluster
+```
+
+Run the following command to discover more options:
+
+```bash
+antctl check cluster --help
+```
+
+#### Post-installation checks
+
+Once Antrea is installed, you can verify that networking is functioning
+correctly within your cluster. To perform post-installation checks, simply run
+the command as follows:
+
+```bash
+antctl check installation
+```
+
+If Antrea is installed in a custom Namespace, you can specify the Namespace by
+adding the `--namespace` flag:
+
+```bash
+antctl check installation --namespace [NAMESPACE]
+```
+
+Run the following command to discover more options:
+
+```bash
+antctl check installation --help
+```
+
+### Collecting support information
+
+Starting with version 0.7.0, Antrea supports the `antctl supportbundle` command,
+which can collect information from the cluster, the Antrea Controller and all
+Antrea agents. This information is useful when trying to troubleshoot issues in
+Kubernetes clusters using Antrea. In particular, when running the command
+out-of-cluster, all the information can be collected under one single directory,
+which you can upload and share when reporting issues on Github. Simply run the
+command as follows:
+
+```bash
+antctl supportbundle [-d TARGET_DIR]
+```
+
+If you do not provide a directory, antctl will create one in the current
+working directory, using the current timestamp as a suffix. The command also
+provides additional flags to filter the results: run `antctl supportbundle
+--help` for the full list.
+
+The collected support bundle will include the following (more information may be
+included over time):
+
+* cluster information: description of the different K8s resources in the cluster
+ (Nodes, Deployments, etc.).
+* Antrea Controller information: all the available logs (contents will vary
+ based on the verbosity selected when running the controller) and state stored
+ at the controller (e.g. computed NetworkPolicy objects).
+* Antrea Agent information: all the available logs from the agent and the OVS
+ daemons, network configuration of the Node (e.g. routes, iptables rules, OVS
+ flows) and state stored at the agent (e.g. computed NetworkPolicy objects
+ received from the controller).
+
+**Be aware that the generated support bundle includes a lot of information,
+ including logs, so please review the contents of the directory before sharing
+ it on Github and ensure that you do not share anything sensitive.**
+
+The `antctl supportbundle` command can also be run inside a Controller or Agent
+Pod, in which case only local information will be collected.
+
+Since v1.10.0, Antrea also supports collecting information by applying a
+`SupportBundleCollection` CRD. You can refer to the [support bundle guide](./support-bundle-guide.md)
+for more information.
+
+### controllerinfo and agentinfo commands
+
+`antctl` controller command `get controllerinfo` (or `get ci`) and agent command
+`get agentinfo` (or `get ai`) print the runtime information of
+`antrea-controller` and `antrea-agent` respectively.
+
+```bash
+antctl get controllerinfo
+antctl get agentinfo
+```
+
+### NetworkPolicy commands
+
+Both Antrea Controller and Agent support querying the NetworkPolicy objects in the Antrea
+control plane API. The source of a control plane NetworkPolicy is the original policy resource
+(K8s NetworkPolicy, Antrea-native Policy or AdminNetworkPolicy) from which the control plane
+NetworkPolicy was derived.
+
+- `antctl` `get networkpolicy` (or `get netpol`) command can print all
+NetworkPolicies, a specified NetworkPolicy, or NetworkPolicies in a specified
+Namespace.
+- `get appliedtogroup` (or `get atg`) command can print all NetworkPolicy
+AppliedToGroups (AppliedToGroup includes the Pods to which a NetworkPolicy is
+applied), or a specified AppliedToGroup.
+- `get addressgroup` (or `get ag`) command can print all NetworkPolicy
+AddressGroups (AddressGroup defines source or destination addresses of
+NetworkPolicy rules), or a specified AddressGroup.
+
+Using the `json` or `yaml` antctl output format prints more information about
+NetworkPolicies, AppliedToGroups, and AddressGroups than the default `table`
+output format. The `NAME` of a control plane NetworkPolicy is the UID of its source
+NetworkPolicy.
+
+```bash
+antctl get networkpolicy [NAME] [-n NAMESPACE] [-T K8sNP|ACNP|ANNP|ANP|BANP] [-o yaml]
+antctl get appliedtogroup [NAME] [-o yaml]
+antctl get addressgroup [NAME] [-o yaml]
+```
+
+NetworkPolicy, AppliedToGroup, and AddressGroup also support the `--sort-by`
+option, which can be used to sort these resources by a particular field. Any
+valid JSON path can be passed as the flag value. If no value is passed, a
+default field is used to sort results. For NetworkPolicy, the default field is the name of
+the source NetworkPolicy. For AppliedToGroup and AddressGroup, the default field is
+the object name (which is a generated UUID).
+
+```bash
+antctl get networkpolicy --sort-by='.sourceRef.name'
+antctl get appliedtogroup --sort-by='.metadata.name'
+antctl get addressgroup --sort-by='.metadata.name'
+```
+
+NetworkPolicy also supports `sort-by=effectivePriority` option, which can be used to
+view the effective order in which the NetworkPolicies are evaluated. Antrea-native
+NetworkPolicy ordering is documented [here](
+antrea-network-policy.md#antrea-native-policy-ordering-based-on-priorities).
+
+```bash
+antctl get networkpolicy --sort-by=effectivePriority
+```
+
+Antrea Agent supports some extra `antctl` commands.
+
+* Printing NetworkPolicies applied to a specific local Pod.
+
+ ```bash
+ antctl get networkpolicy -p POD -n NAMESPACE
+ ```
+
+* Printing NetworkPolicies with a specific source NetworkPolicy type.
+
+ ```bash
+ antctl get networkpolicy -T (K8sNP|ACNP|ANNP|ANP)
+ ```
+
+* Printing NetworkPolicies with a specific source NetworkPolicy name.
+
+ ```bash
+ antctl get networkpolicy -S SOURCE_NAME [-n NAMESPACE]
+ ```
+
+#### Mapping endpoints to NetworkPolicies
+
+`antctl` supports mapping a specific Pod to the NetworkPolicies which "select"
+this Pod, either because they apply to the Pod directly or because one of their
+policy rules selects the Pod.
+
+```bash
+antctl query endpoint -p POD [-n NAMESPACE]
+```
+
+If no Namespace is provided with `-n`, the command will default to the "default"
+Namespace.
+
+This command only works in "controller mode" and **as of now it can only be run
+from inside the Antrea Controller Pod, and not from out-of-cluster**.
+
+#### Evaluating expected NetworkPolicy behavior
+
+`antctl` supports evaluating all the existing Antrea-native NetworkPolicies,
+Kubernetes NetworkPolicies and AdminNetworkPolicies to predict the effective
+policy rule for traffic between source and destination Pods.
+
+```bash
+antctl query networkpolicyevaluation -S NAMESPACE/POD -D NAMESPACE/POD
+```
+
+If only the Pod name is provided, the command will default to the "default" Namespace.
+
+This command only works in "controller mode".
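+
+For example (hypothetical Namespace and Pod names):
+
+```bash
+# predict the effective rule for traffic from Pod "client" to Pod "server",
+# both in Namespace "web"
+antctl query networkpolicyevaluation -S web/client -D web/server
+```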
+
+### Dumping Pod network interface information
+
+`antctl` agent command `get podinterface` (or `get pi`) can dump network
+interface information of all local Pods, or a specified local Pod, or local Pods
+in the specified Namespace, or local Pods matching the specified Pod name.
+
+```bash
+antctl get podinterface [NAME] [-n NAMESPACE]
+```
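+
+For example, to dump interface information for all local Pods in the
+kube-system Namespace:
+
+```bash
+antctl get pi -n kube-system
+```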
+
+### Dumping OVS flows
+
+Starting from version 0.6.0, Antrea Agent supports dumping Antrea OVS flows. The
+`antctl` `get ovsflows` (or `get of`) command can dump all OVS flows, flows
+added for a specified Pod, flows added for Service load-balancing of a
+specified Service, flows added to realize a specified NetworkPolicy, flows
+in the specified OVS flow tables, or flows in all or the specified OVS groups.
+
+```bash
+antctl get ovsflows
+antctl get ovsflows -p POD -n NAMESPACE
+antctl get ovsflows -S SERVICE -n NAMESPACE
+antctl get ovsflows [-n NAMESPACE] -N NETWORKPOLICY --type NETWORKPOLICY_TYPE
+antctl get ovsflows -T TABLE_A,TABLE_B
+antctl get ovsflows -T TABLE_A,TABLE_B_NUM
+antctl get ovsflows -G all
+antctl get ovsflows -G GROUP_ID1,GROUP_ID2
+```
+
+OVS flow tables can be specified using table names or table numbers.
+`antctl get ovsflows --table-names-only` lists all Antrea flow tables. For more information
+about Antrea OVS pipeline and flows, please refer to the [OVS pipeline doc](design/ovs-pipeline.md).
+
+Example outputs of dumping Pod and NetworkPolicy OVS flows:
+
+```bash
+# Dump OVS flows of Pod "coredns-6955765f44-zcbwj"
+$ antctl get of -p coredns-6955765f44-zcbwj -n kube-system
+FLOW
+table=classification, n_packets=513122, n_bytes=42615080, priority=190,in_port="coredns--d0c58e" actions=set_field:0x2/0xffff->reg0,resubmit(,10)
+table=10, n_packets=513122, n_bytes=42615080, priority=200,ip,in_port="coredns--d0c58e",dl_src=52:bd:c6:e0:eb:c1,nw_src=172.100.1.7 actions=resubmit(,30)
+table=10, n_packets=0, n_bytes=0, priority=200,arp,in_port="coredns--d0c58e",arp_spa=172.100.1.7,arp_sha=52:bd:c6:e0:eb:c1 actions=resubmit(,20)
+table=80, n_packets=556468, n_bytes=166477824, priority=200,dl_dst=52:bd:c6:e0:eb:c1 actions=load:0x5->NXM_NX_REG1[],set_field:0x10000/0x10000->reg0,resubmit(,90)
+table=70, n_packets=0, n_bytes=0, priority=200,ip,dl_dst=aa:bb:cc:dd:ee:ff,nw_dst=172.100.1.7 actions=set_field:62:39:b4:e8:05:76->eth_src,set_field:52:bd:c6:e0:eb:c1->eth_dst,dec_ttl,resubmit(,80)
+
+# Get NetworkPolicies applied to Pod "coredns-6955765f44-zcbwj"
+$ antctl get netpol -p coredns-6955765f44-zcbwj -n kube-system
+NAMESPACE NAME APPLIED-TO RULES
+kube-system kube-dns 160ea6d7-0234-5d1d-8ea0-b703d0aa3b46 1
+
+# Dump OVS flows of NetworkPolicy "kube-dns"
+$ antctl get of -N kube-dns -n kube-system
+FLOW
+table=IngressRule, n_packets=0, n_bytes=0, priority=190,conj_id=1,ip actions=set_field:0x1->reg5,ct(commit,table=IngressMetric,zone=65520,exec(set_field:0x1/0xffffffff->ct_label))
+table=IngressRule, n_packets=0, n_bytes=0, priority=200,ip actions=conjunction(1,1/3)
+table=IngressRule, n_packets=0, n_bytes=0, priority=200,ip,reg1=0x5 actions=conjunction(2,2/3),conjunction(1,2/3)
+table=IngressRule, n_packets=0, n_bytes=0, priority=200,udp,tp_dst=53 actions=conjunction(1,3/3)
+table=IngressRule, n_packets=0, n_bytes=0, priority=200,tcp,tp_dst=53 actions=conjunction(1,3/3)
+table=IngressRule, n_packets=0, n_bytes=0, priority=200,tcp,tp_dst=9153 actions=conjunction(1,3/3)
+table=IngressDefaultRule, n_packets=0, n_bytes=0, priority=200,ip,reg1=0x5 actions=drop
+
+# Dump OVS flows of AntreaNetworkPolicy "test-annp"
+$ antctl get ovsflows -N test-annp -n default --type ANNP
+FLOW
+table=AntreaPolicyIngressRule, n_packets=0, n_bytes=0, priority=14900,conj_id=6 actions=set_field:0x6->reg3,set_field:0x400/0x400->reg0,goto_table:IngressMetric
+table=AntreaPolicyIngressRule, n_packets=0, n_bytes=0, priority=14900,ip,nw_src=10.20.1.8 actions=conjunction(6,1/3)
+table=AntreaPolicyIngressRule, n_packets=0, n_bytes=0, priority=14900,ip,nw_src=10.20.2.8 actions=conjunction(6,1/3)
+table=AntreaPolicyIngressRule, n_packets=0, n_bytes=0, priority=14900,reg1=0x3 actions=conjunction(6,2/3)
+table=AntreaPolicyIngressRule, n_packets=0, n_bytes=0, priority=14900,tcp,tp_dst=443 actions=conjunction(6,3/3)
+```
+
+### OVS packet tracing
+
+Starting from version 0.7.0, Antrea Agent supports tracing the OVS flows that a
+specified packet traverses, leveraging the [OVS packet tracing tool](https://docs.openvswitch.org/en/latest/topics/tracing/).
+
+`antctl trace-packet` command starts a packet tracing operation.
+`antctl help trace-packet` shows the usage of the command. This section lists a
+few trace-packet command examples.
+
+```bash
+# Trace an IP packet between two Pods
+antctl trace-packet -S ns1/pod1 -D ns2/pod2
+# Trace a Service request from a local Pod
+antctl trace-packet -S ns1/pod1 -D ns2/svc2 -f "tcp,tcp_dst=80"
+# Trace the Service reply packet (assuming "ns2/pod2" is the Service backend Pod)
+antctl trace-packet -D ns1/pod1 -S ns2/pod2 -f "tcp,tcp_src=80"
+# Trace an IP packet from a Pod to gateway port
+antctl trace-packet -S ns1/pod1 -D antrea-gw0
+# Trace a UDP packet from a Pod to an IP address
+antctl trace-packet -S ns1/pod1 -D 10.1.2.3 -f udp,udp_dst=1234
+# Trace a UDP packet from an IP address to a Pod
+antctl trace-packet -D ns1/pod1 -S 10.1.2.3 -f udp,udp_src=1234
+# Trace an ARP request from a local Pod
+antctl trace-packet -p ns1/pod1 -f arp,arp_spa=10.1.2.3,arp_sha=00:11:22:33:44:55,arp_tpa=10.1.2.1,dl_dst=ff:ff:ff:ff:ff:ff
+```
+
+Example outputs of tracing a UDP (DNS request) packet from a remote Pod to a
+local (coredns) Pod:
+
+```bash
+$ antctl trace-packet -S default/web-client -D kube-system/coredns-6955765f44-zcbwj -f udp,udp_dst=53
+result: |
+ Flow: udp,in_port=32768,vlan_tci=0x0000,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=0,tp_dst=53
+
+ bridge("br-int")
+ ----------------
+ 0. in_port=32768, priority 200, cookie 0x5e000000000000
+ load:0->NXM_NX_REG0[0..15]
+ resubmit(,30)
+ 30. ip, priority 200, cookie 0x5e000000000000
+ ct(table=31,zone=65520)
+ drop
+ -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 31.
+ -> Sets the packet to an untracked state, and clears all the conntrack fields.
+
+ Final flow: unchanged
+ Megaflow: recirc_id=0,eth,udp,in_port=32768,nw_frag=no,tp_src=0x0/0xfc00
+ Datapath actions: ct(zone=65520),recirc(0x53)
+
+ ===============================================================================
+ recirc(0x53) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
+ ===============================================================================
+
+ Flow: recirc_id=0x53,ct_state=new|trk,ct_zone=65520,eth,udp,in_port=32768,vlan_tci=0x0000,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=0,tp_dst=53
+
+ bridge("br-int")
+ ----------------
+ thaw
+ Resuming from table 31
+ 31. priority 0, cookie 0x5e000000000000
+ resubmit(,40)
+ 40. priority 0, cookie 0x5e000000000000
+ resubmit(,50)
+ 50. priority 0, cookie 0x5e000000000000
+ resubmit(,60)
+ 60. priority 0, cookie 0x5e000000000000
+ resubmit(,70)
+ 70. ip,dl_dst=aa:bb:cc:dd:ee:ff,nw_dst=172.100.1.7, priority 200, cookie 0x5e030000000000
+ set_field:62:39:b4:e8:05:76->eth_src
+ set_field:52:bd:c6:e0:eb:c1->eth_dst
+ dec_ttl
+ resubmit(,80)
+ 80. dl_dst=52:bd:c6:e0:eb:c1, priority 200, cookie 0x5e030000000000
+ set_field:0x5->reg1
+ set_field:0x10000/0x10000->reg0
+ resubmit(,90)
+ 90. conj_id=2,ip, priority 190, cookie 0x5e050000000000
+ resubmit(,105)
+ 105. ct_state=+new+trk,ip, priority 190, cookie 0x5e000000000000
+ ct(commit,table=110,zone=65520)
+ drop
+ -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 110.
+ -> Sets the packet to an untracked state, and clears all the conntrack fields.
+
+ Final flow: recirc_id=0x53,eth,udp,reg0=0x10000,reg1=0x5,in_port=32768,vlan_tci=0x0000,dl_src=62:39:b4:e8:05:76,dl_dst=52:bd:c6:e0:eb:c1,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=0,tp_dst=53
+ Megaflow: recirc_id=0x53,ct_state=+new-est-inv+trk,ct_mark=0,eth,udp,in_port=32768,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=192.0.0.0/2,nw_dst=172.100.1.7,nw_ttl=64,nw_frag=no,tp_dst=53
+ Datapath actions: set(eth(src=62:39:b4:e8:05:76,dst=52:bd:c6:e0:eb:c1)),set(ipv4(ttl=63)),ct(commit,zone=65520),recirc(0x54)
+
+ ===============================================================================
+ recirc(0x54) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
+ ===============================================================================
+
+ Flow: recirc_id=0x54,ct_state=new|trk,ct_zone=65520,eth,udp,reg0=0x10000,reg1=0x5,in_port=32768,vlan_tci=0x0000,dl_src=62:39:b4:e8:05:76,dl_dst=52:bd:c6:e0:eb:c1,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=0,tp_dst=53
+
+ bridge("br-int")
+ ----------------
+ thaw
+ Resuming from table 110
+ 110. ip,reg0=0x10000/0x10000, priority 200, cookie 0x5e000000000000
+ output:NXM_NX_REG1[]
+ -> output port is 5
+
+ Final flow: unchanged
+ Megaflow: recirc_id=0x54,eth,ip,in_port=32768,nw_frag=no
+ Datapath actions: 3
+```
+
+### Traceflow
+
+`antctl traceflow` (or `antctl tf`) command is used to start a Traceflow and
+retrieve its result. After the result is collected, the Traceflow will be
+deleted. Users can also create a Traceflow with `kubectl`, but `antctl traceflow`
+offers a simpler way. For more information about Traceflow, refer to the
+[Traceflow guide](traceflow-guide.md).
+
+To start a regular Traceflow, both `--source` (or `-S`) and `--destination` (or
+`-D`) arguments must be specified, and the source must be a Pod. For example:
+
+```bash
+$ antctl tf -S busybox0 -D busybox1
+name: busybox0-to-busybox1-fpllngzi
+phase: Succeeded
+source: default/busybox0
+destination: default/busybox1
+results:
+- node: antrea-linux-testbed7-1
+ timestamp: 1596435607
+ observations:
+ - component: SpoofGuard
+ action: Forwarded
+ - component: Forwarding
+ componentInfo: Output
+ action: Delivered
+```
+
+To start a live-traffic Traceflow, add the `--live-traffic` (or `-L`) flag. Add
+the `--dropped-only` flag to indicate that only packets dropped by a NetworkPolicy
+should be captured in the live-traffic Traceflow. A live-traffic Traceflow
+requires only one of the `--source` and `--destination` arguments to be specified,
+and at least one of them must be a Pod.
+
+The `--flow` (or `-f`) argument can be used to specify the Traceflow packet
+headers with the [ovs-ofctl](http://www.openvswitch.org//support/dist-docs/ovs-ofctl.8.txt)
+flow syntax. The supported flow fields include: IP family (`ipv6` to indicate an
+IPv6 packet), IP protocol (`icmp`, `icmpv6`, `tcp`, `udp`), source and
+destination ports (`tcp_src`, `tcp_dst`, `udp_src`, `udp_dst`), and TCP flags
+(`tcp_flags`).
+
+By default, the command will wait for the Traceflow to succeed, fail, or time
+out. The default timeout is 10 seconds, but can be changed with the
+`--timeout` (or `-t`) argument. Add the `--no-wait` flag to start a Traceflow
+without waiting for its results. In this case, the command will not delete the
+Traceflow resource. The `traceflow` command supports yaml and json output.
+
+More examples of `antctl traceflow`:
+
+```bash
+# Start a Traceflow from pod1 to pod2, both Pods are in Namespace default
+$ antctl traceflow -S pod1 -D pod2
+# Start a Traceflow from pod1 in Namespace ns1 to a destination IP
+$ antctl traceflow -S ns1/pod1 -D 123.123.123.123
+# Start a Traceflow from pod1 to Service svc1 in Namespace ns1
+$ antctl traceflow -S pod1 -D ns1/svc1 -f tcp,tcp_dst=80
+# Start a Traceflow from pod1 to pod2, with a UDP packet to destination port 1234
+$ antctl traceflow -S pod1 -D pod2 -f udp,udp_dst=1234
+# Start a Traceflow for live TCP traffic from pod1 to svc1, with 1 minute timeout
+$ antctl traceflow -S pod1 -D svc1 -f tcp --live-traffic -t 1m
+# Start a Traceflow to capture the first dropped TCP packet to pod1 on port 80, within 10 minutes
+$ antctl traceflow -D pod1 -f tcp,tcp_dst=80 --live-traffic --dropped-only -t 10m
+```
+
+### Antctl Proxy
+
+antctl can run as a reverse proxy for the Antrea API (Controller or arbitrary
+Agent). Usage is very similar to `kubectl proxy` and the implementation is
+essentially the same.
+
+To run a reverse proxy for the Antrea Controller API, use:
+
+```bash
+antctl proxy --controller
+```
+
+To run a reverse proxy for the Antrea Agent API for the antrea-agent Pod running
+on Node <NODE_NAME>, use:
+
+```bash
+antctl proxy --agent-node <NODE_NAME>
+```
+
+You can then access the API at `127.0.0.1:8001`. To implement this
+functionality, antctl retrieves the Node IP address and API server port for the
+Antrea Controller or for the specified Agent from the K8s API, and it proxies
+all the requests received on `127.0.0.1:8001` directly to that IP / port. One
+thing to keep in mind is that the TLS connection between the proxy and the
+Antrea Agent or Controller will not be secure (no certificate verification), and
+the proxy should be used for debugging only.
+
+To see the full list of supported options, run `antctl proxy --help`.
+
+This feature is useful if one wants to use the Go
+[pprof](https://golang.org/pkg/net/http/pprof/) tool to collect runtime
+profiling data about the Antrea components. Please refer to this
+[document](troubleshooting.md#profiling-antrea-components) for more information.
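+
+A minimal sketch of collecting a heap profile through the proxy (the pprof
+endpoint below is the standard path exposed by Go's net/http/pprof):
+
+```bash
+# terminal 1: proxy the Antrea Controller API to 127.0.0.1:8001
+antctl proxy --controller
+# terminal 2: fetch and analyze a heap profile through the proxy
+go tool pprof http://127.0.0.1:8001/debug/pprof/heap
+```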
+
+### Flow Aggregator commands
+
+antctl supports dumping the flow records handled by the Flow Aggregator, and
+printing metrics about flow record processing. These commands are only available
+when you exec into the Flow Aggregator Pod.
+
+#### Dumping flow records
+
+antctl supports dumping flow records stored in the Flow Aggregator. The
+`antctl get flowrecords` command can dump all matching flow records. It supports
+the 5-tuple flow key or a subset of the 5-tuple as a filter. A 5-tuple flow key
+contains Source IP, Destination IP, Source Port, Destination Port and Transport
+Protocol. If the filter is empty, all flow records will be dumped.
+
+The command provides a compact display of the flow records in the default table
+output format, which contains the flow key, source Pod name, destination Pod name,
+source Pod Namespace, destination Pod Namespace and destination Service name for
+each flow record. Using the `json` or `yaml` antctl output format will output
+flow record information in a structured format, and will include more
+information about each flow record. `antctl get flowrecords --help` shows the
+usage of the command. This section lists a few example commands for dumping
+flow records.
+
+```bash
+# Get the list of all flow records
+antctl get flowrecords
+# Get the list of flow records with a complete filter and output in json format
+antctl get flowrecords --srcip 10.0.0.1 --dstip 10.0.0.2 --proto 6 --srcport 1234 --dstport 5678 -o json
+# Get the list of flow records with a partial filter, e.g. source address and source port
+antctl get flowrecords --srcip 10.0.0.1 --srcport 1234
+```
+
+Example outputs of dumping flow records:
+
+```bash
+$ antctl get flowrecords --srcip 10.10.1.4 --dstip 10.10.0.2
+SRC_IP DST_IP SPORT DPORT PROTO SRC_POD DST_POD SRC_NS DST_NS SERVICE
+10.10.1.4 10.10.0.2 38581 53 17 flow-aggregator-67dc8ddfc8-zx8sg coredns-78fcd69978-7vc6k flow-aggregator kube-system kube-system/kube-dns:dns
+10.10.1.4 10.10.0.2 56505 53 17 flow-aggregator-67dc8ddfc8-zx8sg coredns-78fcd69978-7vc6k flow-aggregator kube-system kube-system/kube-dns:dns
+
+$ antctl get flowrecords --srcip 10.10.0.1 --srcport 50497 -o json
+[
+ {
+ "destinationClusterIPv4": "0.0.0.0",
+ "destinationIPv4Address": "10.10.1.2",
+ "destinationNodeName": "k8s-node-worker-1",
+ "destinationPodName": "coredns-78fcd69978-x2twv",
+ "destinationPodNamespace": "kube-system",
+ "destinationServicePort": 0,
+ "destinationServicePortName": "",
+ "destinationTransportPort": 53,
+ "egressNetworkPolicyName": "",
+ "egressNetworkPolicyNamespace": "",
+ "egressNetworkPolicyRuleAction": 0,
+ "egressNetworkPolicyRuleName": "",
+ "egressNetworkPolicyType": 0,
+ "flowEndReason": 3,
+ "flowEndSeconds": 1635546893,
+ "flowStartSeconds": 1635546867,
+ "flowType": 2,
+ "ingressNetworkPolicyName": "",
+ "ingressNetworkPolicyNamespace": "",
+ "ingressNetworkPolicyRuleAction": 0,
+ "ingressNetworkPolicyRuleName": "",
+ "ingressNetworkPolicyType": 0,
+ "octetDeltaCount": 99,
+ "octetDeltaCountFromDestinationNode": 99,
+ "octetDeltaCountFromSourceNode": 0,
+ "octetTotalCount": 99,
+ "octetTotalCountFromDestinationNode": 99,
+ "octetTotalCountFromSourceNode": 0,
+ "packetDeltaCount": 1,
+ "packetDeltaCountFromDestinationNode": 1,
+ "packetDeltaCountFromSourceNode": 0,
+ "packetTotalCount": 1,
+ "packetTotalCountFromDestinationNode": 1,
+ "packetTotalCountFromSourceNode": 0,
+ "protocolIdentifier": 17,
+ "reverseOctetDeltaCount": 192,
+ "reverseOctetDeltaCountFromDestinationNode": 192,
+ "reverseOctetDeltaCountFromSourceNode": 0,
+ "reverseOctetTotalCount": 192,
+ "reverseOctetTotalCountFromDestinationNode": 192,
+ "reverseOctetTotalCountFromSourceNode": 0,
+ "reversePacketDeltaCount": 1,
+ "reversePacketDeltaCountFromDestinationNode": 1,
+ "reversePacketDeltaCountFromSourceNode": 0,
+ "reversePacketTotalCount": 1,
+ "reversePacketTotalCountFromDestinationNode": 1,
+ "reversePacketTotalCountFromSourceNode": 0,
+ "sourceIPv4Address": "10.10.0.1",
+ "sourceNodeName": "",
+ "sourcePodName": "",
+ "sourcePodNamespace": "",
+ "sourceTransportPort": 50497,
+ "tcpState": ""
+ }
+]
+```
+
+#### Record metrics
+
+Flow Aggregator supports printing record metrics. The `antctl get recordmetrics`
+command can print all metrics related to the Flow Aggregator. The metrics include
+the following:
+
+* number of records received by the collector process in the Flow Aggregator
+* number of records exported by the Flow Aggregator
+* number of active flows that are being tracked
+* number of exporters connected to the Flow Aggregator
+
+Example outputs of record metrics:
+
+```bash
+RECORDS-EXPORTED RECORDS-RECEIVED FLOWS EXPORTERS-CONNECTED
+46 118 7 2
+```
+
+### Multi-cluster commands
+
+For information about Antrea Multi-cluster commands, please refer to the
+[antctl Multi-cluster commands](./multicluster/antctl.md).
+
+### Multicast commands
+
+The `antctl get podmulticaststats [POD_NAME] [-n NAMESPACE]` command prints inbound
+and outbound multicast statistics for each Pod. Note that IGMP packets are not counted.
+
+Example output of podmulticaststats:
+
+```bash
+$ antctl get podmulticaststats
+
+NAMESPACE NAME INBOUND OUTBOUND
+testmulticast-vw7gx5b9 test3-receiver-2 30 0
+testmulticast-vw7gx5b9 test3-sender-1 0 10
+```
+
+### Showing memberlist state
+
+`antctl` agent command `get memberlist` (or `get ml`) prints the state of the
+memberlist cluster of the Antrea Agent.
+
+```bash
+$ antctl get memberlist
+
+NODE IP STATUS
+worker1 172.18.0.4 Alive
+worker2 172.18.0.3 Alive
+worker3 172.18.0.2 Dead
+```
+
+### BGP commands
+
+`antctl` agent command `get bgppolicy` prints the effective BGP policy applied on the local Node.
+It includes the name, local ASN, router ID and listen port of the effective BGP policy.
+
+```bash
+$ antctl get bgppolicy
+
+NAME ROUTER-ID LOCAL-ASN LISTEN-PORT
+example-bgp-policy 172.18.0.2 64512 179
+```
+
+`antctl` agent command `get bgppeers` prints the current status of all BGP peers
+of the effective BGP policy applied on the local Node. It includes the peer IP
+address with port, ASN, and state of each BGP peer.
+
+```bash
+# Get the list of all bgp peers
+$ antctl get bgppeers
+
+PEER ASN STATE
+192.168.77.200:179 65001 Established
+[fec0::196:168:77:251]:179 65002 Active
+
+# Get the list of IPv4 bgp peers only
+$ antctl get bgppeers --ipv4-only
+
+PEER ASN STATE
+192.168.77.200:179 65001 Established
+192.168.77.201:179 65002 Active
+
+# Get the list of IPv6 bgp peers only
+$ antctl get bgppeers --ipv6-only
+
+PEER ASN STATE
+[fec0::196:168:77:251]:179 65001 Established
+[fec0::196:168:77:252]:179 65002 Active
+```
+
+`antctl` agent command `get bgproutes` prints the advertised BGP routes on the local Node.
+For more information about route advertisement, please refer to [Advertisements](./bgp-policy.md#advertisements).
+
+```bash
+# Get the list of all advertised bgp routes
+$ antctl get bgproutes
+
+ROUTE
+10.96.10.10/32
+192.168.77.100/32
+fec0::10:96:10:10/128
+fec0::192:168:77:100/128
+
+# Get the list of advertised IPv4 bgp routes
+$ antctl get bgproutes --ipv4-only
+
+ROUTE
+10.96.10.10/32
+192.168.77.100/32
+
+# Get the list of advertised IPv6 bgp routes
+$ antctl get bgproutes --ipv6-only
+
+ROUTE
+fec0::10:96:10:10/128
+fec0::192:168:77:100/128
+```
+
+### Upgrade existing objects of CRDs
+
+antctl supports upgrading existing objects of Antrea CRDs to the storage version.
+The related sub-commands should be run out-of-cluster. Please ensure that the
+kubeconfig file used by antctl has the necessary permissions. The required permissions
+are listed in the following sample ClusterRole.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: antctl
+rules:
+ - apiGroups:
+ - apiextensions.k8s.io
+ resources:
+ - customresourcedefinitions
+ verbs:
+ - get
+ - list
+ - apiGroups:
+ - apiextensions.k8s.io
+ resources:
+ - customresourcedefinitions/status
+ verbs:
+ - update
+ - apiGroups:
+ - crd.antrea.io
+ resources:
+ - "*"
+ verbs:
+ - get
+ - list
+ - update
+```
+
+This command performs a dry-run to upgrade all existing objects of Antrea CRDs to
+the storage version:
+
+```bash
+antctl upgrade api-storage --dry-run
+```
+
+This command upgrades all existing objects of Antrea CRDs to the storage version:
+
+```bash
+antctl upgrade api-storage
+```
+
+This command upgrades existing AntreaAgentInfo objects to the storage version:
+
+```bash
+antctl upgrade api-storage --crds=antreaagentinfos.crd.antrea.io
+```
+
+This command upgrades existing Egress and Group objects to the storage version:
+
+```bash
+antctl upgrade api-storage --crds=egresses.crd.antrea.io,groups.crd.antrea.io
+```
+
+If you encounter any errors related to permissions while running the commands, double-check
+the permissions of the kubeconfig used by antctl. Ensure that the ClusterRole has the
+required permissions. The following sample errors are caused by insufficient permissions:
+
+```bash
+Error: failed to get CRD list: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "user" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
+
+Error: externalippools.crd.antrea.io is forbidden: User "user" cannot list resource "externalippools" in API group "crd.antrea.io" at the cluster scope
+
+Error: error upgrading object prod-external-ip-pool of CRD "externalippools.crd.antrea.io": externalippools.crd.antrea.io "prod-external-ip-pool" is forbidden: User "user" cannot update resource "externalippools" in API group "crd.antrea.io" at the cluster scope
+
+Error: error updating CRD "externalippools.crd.antrea.io" status.storedVersion: customresourcedefinitions.apiextensions.k8s.io "externalippools.crd.antrea.io" is forbidden: User "user" cannot update resource "customresourcedefinitions/status" in API group "apiextensions.k8s.io" at the cluster scope
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/antrea-agent-simulator.md b/content/docs/v2.2.0-alpha.2/docs/antrea-agent-simulator.md
new file mode 100644
index 00000000..d2d265ea
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/antrea-agent-simulator.md
@@ -0,0 +1,53 @@
+# Run Antrea agent simulator
+
+This document describes how to run the Antrea agent simulator. The simulator is
+useful for Antrea scalability testing, without having to create a very large
+cluster.
+
+## Build the images
+
+```bash
+make build-scale-simulator
+```
+
+## Create the yaml file
+
+This demo uses 1 simulator. The following command will create the yaml file
+`build/yamls/antrea-scale.yml`:
+
+```bash
+make manifest-scale
+```
+
+The above yaml will create one simulated Node/Pod. To change the number of
+instances, you can modify `spec.replicas` of the StatefulSet
+`antrea-agent-simulator` in the yaml, or scale it via
+`kubectl scale statefulset/antrea-agent-simulator -n kube-system --replicas=<COUNT>`
+after deploying it.
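+
+For example:
+
+```bash
+# scale to 50 simulated Nodes
+kubectl scale statefulset/antrea-agent-simulator -n kube-system --replicas=50
+```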
+
+## Taint the simulator node
+
+To prevent Pods from being scheduled on the simulated Node(s), you can use the
+following taint.
+
+```bash
+kubectl taint -l 'antrea/instance=simulator' node mocknode=true:NoExecute
+```
+
+## Create secret for kubemark
+
+```bash
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kube-system --from-file=admin.conf=<path to kubeconfig file>
+```
+
+## Apply the yaml file
+
+```bash
+kubectl apply -f build/yamls/antrea-scale.yml
+```
+
+Check the simulated Node:
+
+```bash
+kubectl get nodes -l 'antrea/instance=simulator'
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/antrea-ipam.md b/content/docs/v2.2.0-alpha.2/docs/antrea-ipam.md
new file mode 100644
index 00000000..41baa073
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/antrea-ipam.md
@@ -0,0 +1,519 @@
+# Antrea IPAM Capabilities
+
+
+* [Antrea IPAM Capabilities](#antrea-ipam-capabilities)
+ * [Running NodeIPAM within Antrea Controller](#running-nodeipam-within-antrea-controller)
+ * [Configuration](#configuration)
+ * [Antrea Flexible IPAM](#antrea-flexible-ipam)
+ * [Usage](#usage)
+ * [Enable AntreaIPAM feature gate and bridging mode](#enable-antreaipam-feature-gate-and-bridging-mode)
+ * [Create IPPool CR](#create-ippool-cr)
+ * [IPPool Annotations on Namespace](#ippool-annotations-on-namespace)
+ * [IPPool Annotations on Pod (available since Antrea 1.5)](#ippool-annotations-on-pod-available-since-antrea-15)
+ * [Persistent IP for StatefulSet Pod (available since Antrea 1.5)](#persistent-ip-for-statefulset-pod-available-since-antrea-15)
+ * [Data path behaviors](#data-path-behaviors)
+ * [Requirements for this Feature](#requirements-for-this-feature)
+ * [Flexible IPAM design](#flexible-ipam-design)
+ * [On IPPool CR create/update event](#on-ippool-cr-createupdate-event)
+ * [On StatefulSet create event](#on-statefulset-create-event)
+ * [On StatefulSet delete event](#on-statefulset-delete-event)
+ * [On Pod create](#on-pod-create)
+ * [On Pod delete](#on-pod-delete)
+ * [IPAM for Secondary Network](#ipam-for-secondary-network)
+ * [Prerequisites](#prerequisites)
+ * [CNI IPAM configuration](#cni-ipam-configuration)
+ * [Configuration with `NetworkAttachmentDefinition` CRD](#configuration-with-networkattachmentdefinition-crd)
+ * [`IPPool` CRD](#ippool-crd)
+
+
+## Running NodeIPAM within Antrea Controller
+
+NodeIPAM is a Kubernetes component which manages the allocation of a Pod IP
+address range (CIDR) to each Node when the Node initializes.
+
+In single-stack deployments, NodeIPAM allocates a single IPv4 or IPv6 CIDR per
+Node, while in dual-stack deployments, NodeIPAM allocates two CIDRs per Node:
+one for each IP family.
+
+NodeIPAM is configured with a cluster CIDR for each IP family, which it slices
+into smaller per-Node CIDRs. When a Node is initialized, these CIDRs are set in
+the `podCIDRs` attribute of the Node spec.
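+
+For example, you can inspect the per-Node CIDRs allocated by NodeIPAM with:
+
+```bash
+# the POD-CIDRS column shows the CIDR(s) allocated to each Node
+kubectl get nodes -o custom-columns=NAME:.metadata.name,POD-CIDRS:.spec.podCIDRs
+```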
+
+Antrea NodeIPAM controller can be executed in scenarios where the
+NodeIPAMController is disabled in kube-controller-manager.
+
+Note that running Antrea NodeIPAM while NodeIPAMController runs within
+kube-controller-manager would cause conflicts and result in unstable behavior.
+
+### Configuration
+
+Antrea Controller NodeIPAM configuration items are grouped under the `nodeIPAM`
+dictionary key.
+
+The `nodeIPAM` dictionary contains the following items:
+
+- `enableNodeIPAM`: Enable the integrated NodeIPAM controller within the Antrea
+controller. Default is false.
+
+- `clusterCIDRs`: CIDR ranges for Pods in the cluster. A string array containing a
+single CIDR range, or multiple ranges. The CIDRs could be either IPv4 or IPv6. At most
+one CIDR may be specified for each IP family. Example values:
+`[172.100.0.0/16]`, `[172.100.0.0/20, fd00:172:100::/60]`.
+
+- `serviceCIDR`: CIDR range for IPv4 Services in the cluster. It is not necessary to
+specify it when there is no overlap with clusterCIDRs.
+
+- `serviceCIDRv6`: CIDR range for IPv6 Services in the cluster. It is not necessary to
+ specify it when there is no overlap with clusterCIDRs.
+
+- `nodeCIDRMaskSizeIPv4`: Mask size for IPv4 Node CIDR in IPv4 or dual-stack
+cluster. Valid range is 16 to 30. Default is 24.
+
+- `nodeCIDRMaskSizeIPv6`: Mask size for IPv6 Node CIDR in IPv6 or dual-stack
+cluster. Valid range is 64 to 126. Default is 64.
+
+Below is a sample of needed changes in the Antrea deployment YAML:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ nodeIPAM:
+ enableNodeIPAM: true
+ clusterCIDRs: [172.100.0.0/16]
+```
+
+When running Antrea NodeIPAM in a particular version or scenario, you may need to
+be aware of the following:
+
+* Prior to v1.12, a feature gate, `NodeIPAM` must also be enabled for
+ `antrea-controller`.
+* Prior to v1.13, running Antrea NodeIPAM without kube-proxy is not supported.
+ Starting with v1.13, the `kubeAPIServerOverride` option in the `antrea-controller`
+ configuration must be set to the address of Kubernetes apiserver when kube-proxy
+ is not deployed.
+
+## Antrea Flexible IPAM
+
+Since version 1.4, Antrea supports flexible control over Pod IP addressing. Pod
+IP addresses can be allocated from an `IPPool`. When a Pod's IP is allocated
+from an IPPool, the traffic from the Pod to Pods on another Node, or from the Pod
+to the external network, will be sent to the underlay network through the Node's
+transport network interface, and will be forwarded/routed by the underlay network.
+We also call this forwarding mode `bridging mode`.
+
+The `IPPool` CRD defines a desired set of IP ranges and VLANs. An `IPPool` can be
+annotated on a Namespace, Pod, or the PodTemplate of a StatefulSet/Deployment.
+Antrea will then manage IP address assignment for the corresponding Pods according
+to the `IPPool` spec.
+Note that the IP pool annotation cannot be updated or deleted without recreating
+the resource. An `IPPool` can be extended, but cannot be shrunk if already
+assigned to a resource. The IP ranges of IPPools must not overlap, otherwise it
+would lead to undefined behavior.
+
+Regular `Subnet per Node` IPAM will continue to be used for resources without the
+IPPool annotation, or when the `AntreaIPAM` feature is disabled.
+
+### Usage
+
+#### Enable AntreaIPAM feature gate and bridging mode
+
+To enable flexible IPAM, you need to enable the `AntreaIPAM` feature gate for
+both `antrea-controller` and `antrea-agent`, and set the `enableBridgingMode`
+configuration parameter of `antrea-agent` to `true`.
+
+When Antrea is installed from YAML, the needed changes in the Antrea
+ConfigMap `antrea-config` YAML are as below:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ AntreaIPAM: true
+ antrea-agent.conf: |
+ featureGates:
+ AntreaIPAM: true
+ enableBridgingMode: true
+ trafficEncapMode: "noEncap"
+ noSNAT: true
+```
+
+Alternatively, you can use the following helm install/upgrade command to configure
+the above options:
+
+```bash
+helm upgrade --install antrea antrea/antrea --namespace kube-system \
+  --set enableBridgingMode=true,featureGates.AntreaIPAM=true,trafficEncapMode=noEncap,noSNAT=true
+```
+
+#### Create IPPool CR
+
+The following example YAML manifest creates an IPPool CR.
+
+```yaml
+apiVersion: "crd.antrea.io/v1beta1"
+kind: IPPool
+metadata:
+ name: pool1
+spec:
+ ipRanges:
+ - start: "10.2.0.12"
+ end: "10.2.0.20"
+ subnetInfo:
+ gateway: "10.2.0.1"
+ prefixLength: 24
+    vlan: 2 # Default is 0 (untagged). Valid range is 0-4094.
+```
+
+#### IPPool Annotations on Namespace
+
+The following example YAML manifest creates a Namespace to allocate Pod IPs from the IP pool.
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: namespace1
+ annotations:
+ ipam.antrea.io/ippools: 'pool1'
+```
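+
+A quick way to verify the assignment (the Pod name below is hypothetical):
+
+```bash
+# Pods created in namespace1 should get an IP from pool1's range (10.2.0.12 - 10.2.0.20)
+kubectl run ippool-test -n namespace1 --image=busybox --restart=Never -- sleep 3600
+kubectl get pod ippool-test -n namespace1 -o wide
+```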
+
+#### IPPool Annotations on Pod (available since Antrea 1.5)
+
+Since Antrea v1.5.0, the Pod IPPool annotation is supported and has a higher
+priority than the Namespace IPPool annotation. This annotation can also be added
+to the `PodTemplate` of a controller resource such as a StatefulSet or Deployment.
+
+The Pod IP annotation can be used on a single Pod to specify a fixed IP for that Pod.
+
+Examples of annotations on a Pod or PodTemplate:
+
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: statefulset1
+spec:
+ replicas: 1 # Do not increase replicas if there is pod-ips annotation in PodTemplate
+ template:
+ metadata:
+ annotations:
+ ipam.antrea.io/ippools: 'sts-ip-pool1' # This annotation will be set automatically on all Pods managed by this resource
+        ipam.antrea.io/pod-ips: '<ip-in-ippool>'
+```
+
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: statefulset1
+spec:
+ replicas: 4
+ template:
+ metadata:
+ annotations:
+ ipam.antrea.io/ippools: 'sts-ip-pool1' # This annotation will be set automatically on all Pods managed by this resource
+ # Do not add pod-ips annotation to PodTemplate if there is more than 1 replica
+```
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod1
+ annotations:
+ ipam.antrea.io/ippools: 'pod-ip-pool1'
+```
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod1
+ annotations:
+ ipam.antrea.io/ippools: 'pod-ip-pool1'
+    ipam.antrea.io/pod-ips: '<ip-in-ippool>'
+```
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod1
+ annotations:
+    ipam.antrea.io/pod-ips: '<ip-in-ippool>'
+```
+
+#### Persistent IP for StatefulSet Pod (available since Antrea 1.5)
+
+A StatefulSet Pod's IP will be kept across Pod restarts, when the IP is allocated
+from the annotated IPPool.
+
+### Data path behaviors
+
+When `AntreaIPAM` is enabled, `antrea-agent` will connect the Node's network interface
+to the OVS bridge at startup, and it will detach the interface from the OVS bridge and
+restore its configurations at exit. The Node may lose network connectivity when `antrea-agent`
+or the OVS daemons are stopped unexpectedly, which can be recovered by rebooting the Node.
+`AntreaIPAM` Pods' traffic will not be routed by the local Node's network stack.
+
+Traffic from `AntreaIPAM` Pods without VLAN, regular `Subnet per Node` IPAM Pods, and K8s
+Nodes is recognized as VLAN 0 (untagged).
+
+Traffic to a local Pod in the Pod's VLAN will be sent to the Pod's OVS port directly,
+after the destination MAC is rewritten to the Pod's MAC address. This includes
+`AntreaIPAM` Pods and regular `Subnet per Node` IPAM Pods, even when they are not in the
+same subnet. Traffic to a Pod in a different VLAN will be sent to the underlay network,
+where the underlay router will route the traffic to the destination VLAN.
+
+### Requirements for this Feature
+
+As of now, this feature is supported on Linux Nodes, with IPv4, `system` OVS datapath
+type, `noEncap`, `noSNAT` traffic mode, and Antrea Proxy enabled. Configuration
+with `proxyAll` enabled is not verified.
+
+The IPs in the `IPPools` without VLAN must be in the same underlay subnet as the Node
+IP, because inter-Node traffic of AntreaIPAM Pods is forwarded by the Node network.
+`IPPools` with VLAN must not overlap with other network subnets, and the underlay network
+router should provide the network connectivity for these VLANs. Only a single IP pool can
+be included in the Namespace annotation. In the future, annotation of up to two pools for
+IPv4 and IPv6 respectively will be supported.
+
+### Flexible IPAM design
+
+When the `AntreaIPAM` feature gate is enabled, `antrea-controller` will watch IPPool CRs and
+StatefulSets from `kube-apiserver`.
+
+#### On IPPool CR create/update event
+
+`antrea-controller` will update IPPool counters, and periodically clean up stale IP addresses.
+
+#### On StatefulSet create event
+
+`antrea-controller` will check the Antrea IPAM annotations on the StatefulSet, and preallocate
+IPs from the specified IPPool for the StatefulSet Pods.
+
+#### On StatefulSet delete event
+
+`antrea-controller` will clean up IP allocations for this StatefulSet.
+
+#### On Pod create
+
+`antrea-agent` will receive a CNI add request, and it will then check the Antrea IPAM annotations
+and allocate an IP for the Pod, which can be a pre-allocated StatefulSet IP, a user-specified
+IP, or the next available IP in the specified IPPool.
+
+#### On Pod delete
+
+`antrea-agent` will receive a CNI del request and release the IP allocation from the IPPool.
+If the IP is a pre-allocated StatefulSet IP, it will remain in the pre-allocated status, so the
+Pod will get the same IP after it is recreated.
+
+## IPAM for Secondary Network
+
+With the AntreaIPAM feature, Antrea can allocate IPs for Pod secondary networks,
+including both [secondary networks managed by Antrea](secondary-network.md) and
+secondary networks managed by [Multus](cookbooks/multus).
+
+### Prerequisites
+
+The IPAM capability for secondary network was added in Antrea version 1.7. It
+requires the `AntreaIPAM` feature gate to be enabled on both `antrea-controller`
+and `antrea-agent`, as `AntreaIPAM` is still an alpha feature at this moment and
+is not enabled by default.
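+
+A minimal sketch of the corresponding ConfigMap changes is shown below (the
+ConfigMap name and Namespace follow the default Antrea deployment):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: antrea-config
+  namespace: kube-system
+data:
+  antrea-agent.conf: |
+    featureGates:
+      AntreaIPAM: true
+  antrea-controller.conf: |
+    featureGates:
+      AntreaIPAM: true
+```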
+
+### CNI IPAM configuration
+
+To configure Antrea IPAM, `antrea` should be specified as the IPAM plugin in
+the CNI IPAM configuration, and at least one Antrea IPPool should be specified
+in the `ippools` field. IPs will be allocated from the specified IPPool(s) for
+the secondary network.
+
+```json
+{
+ "cniVersion": "0.3.0",
+ "name": "ipv4-net-1",
+ "type": "macvlan",
+ "master": "eth0",
+ "mode": "bridge",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1" ]
+ }
+}
+```
+
+Multiple IPPools can be specified to allocate multiple IPs (one from each IPPool) for
+the secondary network. For example, you can specify one IPPool to allocate an
+IPv4 address and another IPPool to allocate an IPv6 address in the dual-stack
+case.
+
+```json
+{
+ "cniVersion": "0.3.0",
+ "name": "dual-stack-net-1",
+ "type": "macvlan",
+ "master": "eth0",
+ "mode": "bridge",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1", "ipv6-pool-1" ]
+ }
+}
+```
+
+Additionally, Antrea IPAM supports the same configuration of static IP
+addresses, static routes, and DNS settings as the
+[static IPAM plugin](https://www.cni.dev/plugins/current/ipam/static). The
+following example requests an IP from an IPPool and also specifies two
+additional static IP addresses. It also includes static routes and DNS settings.
+
+```json
+{
+ "cniVersion": "0.3.0",
+ "name": "pool-and-static-net-1",
+ "type": "bridge",
+ "bridge": "br0",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1" ],
+ "addresses": [
+ {
+ "address": "10.10.0.1/24",
+ "gateway": "10.10.0.254"
+ },
+ {
+ "address": "3ffe:ffff:0:01ff::1/64",
+ "gateway": "3ffe:ffff:0::1"
+ }
+ ],
+ "routes": [
+ { "dst": "0.0.0.0/0" },
+ { "dst": "192.168.0.0/16", "gw": "10.10.5.1" },
+ { "dst": "3ffe:ffff:0:01ff::1/64" }
+ ],
+ "dns": {
+ "nameservers" : ["8.8.8.8"],
+ "domain": "example.com",
+ "search": [ "example.com" ]
+ }
+ }
+}
+```
+
+If only static IP addresses are needed, the CNI IPAM configuration can include
+just the static addresses, without any IPPools.
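+
+For example, the following sketch configures only static addresses (the network
+name and the addresses are illustrative):
+
+```json
+{
+  "cniVersion": "0.3.0",
+  "name": "static-only-net-1",
+  "type": "macvlan",
+  "master": "eth0",
+  "mode": "bridge",
+  "ipam": {
+    "type": "antrea",
+    "addresses": [
+      {
+        "address": "10.10.0.2/24",
+        "gateway": "10.10.0.254"
+      }
+    ]
+  }
+}
+```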
+
+### Configuration with `NetworkAttachmentDefinition` CRD
+
+CNI and IPAM configuration of a secondary network is typically defined with the
+`NetworkAttachmentDefinition` CRD. For example:
+
+```yaml
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+ name: ipv4-net-1
+spec:
+ {
+ "cniVersion": "0.3.0",
+ "type": "macvlan",
+ "master": "eth0",
+ "mode": "bridge",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1" ]
+ }
+ }
+```
+
+## `IPPool` CRD
+
+Antrea IP pools are defined with the `IPPool` CRD. The following two examples
+define an IPv4 and an IPv6 IP pool respectively. The first example (IPv4) uses a
+CIDR to define the range of allocatable IPs, while the second example uses a
+"range", with a start and end IP address. When using a CIDR, it is important to
+keep in mind that the first IP in the CIDR will be excluded and will never be
+allocated. When the CIDR represents a traditional subnet, the first IP is
+typically the "network IP". Additionally, for IPv4, when the `prefixLength`
+matches the CIDR mask size, the last IP in the CIDR, which traditionally
+represents the "broadcast IP", will also be excluded. The provided gateway IP
+will of course always be excluded. On the other hand, when using a range with a
+start and end IP address, both of these IPs will be allocatable (except if one
+of them corresponds to the gateway).
+
+```yaml
+apiVersion: "crd.antrea.io/v1beta1"
+kind: IPPool
+metadata:
+ name: ipv4-pool-1
+spec:
+ ipRanges:
+ # 61 different IPs can be allocated from this pool: 64 (2^6) - 3 (network IP, broadcast IP, gateway IP).
+ - cidr: "10.10.1.0/26"
+ subnetInfo:
+ gateway: "10.10.1.1"
+ prefixLength: 26
+```
+
+```yaml
+apiVersion: "crd.antrea.io/v1beta1"
+kind: IPPool
+metadata:
+ name: ipv6-pool-1
+spec:
+ ipRanges:
+ # 257 different IPs can be allocated from this pool: 0x200 - 0x100 + 1.
+ - start: "3ffe:ffff:1:01ff::0100"
+ end: "3ffe:ffff:1:01ff::0200"
+ subnetInfo:
+ gateway: "3ffe:ffff:1:01ff::1"
+ prefixLength: 64
+```
+
+When used for an Antrea secondary VLAN network, the VLAN set in an `IPPool` IP
+range will be passed to the VLAN interface configuration. For example:
+
+```yaml
+apiVersion: "crd.antrea.io/v1beta1"
+kind: IPPool
+metadata:
+ name: ipv4-pool-1
+spec:
+ ipRanges:
+ - cidr: "10.10.1.0/26"
+ subnetInfo:
+ gateway: "10.10.1.1"
+ prefixLength: 24
+ vlan: 100
+
+---
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+ name: ipv4-net-1
+spec:
+ {
+ "cniVersion": "0.3.0",
+ "type": "antrea",
+ "networkType": "vlan",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1" ]
+ }
+ }
+```
+
+You can refer to the [Antrea secondary network document](secondary-network.md)
+for more information about Antrea secondary VLAN network configuration.
+
+For other network types, the VLAN field in the `IPPool` will be ignored.
diff --git a/content/docs/v2.2.0-alpha.2/docs/antrea-l7-network-policy.md b/content/docs/v2.2.0-alpha.2/docs/antrea-l7-network-policy.md
new file mode 100644
index 00000000..62fecd80
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/antrea-l7-network-policy.md
@@ -0,0 +1,404 @@
+# Antrea Layer 7 NetworkPolicy
+
+## Table of Contents
+
+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Usage](#usage)
+ - [HTTP](#http)
+ - [More examples](#more-examples)
+ - [TLS](#tls)
+ - [More examples](#more-examples-1)
+ - [Logs](#logs)
+- [Limitations](#limitations)
+
+
+## Introduction
+
+NetworkPolicy was initially used to restrict network access at layers 3 (Network) and 4 (Transport) of the OSI model,
+based on IP address, transport protocol, and port. Securing applications at IP and port level provides limited security
+capabilities, as the service an application provides is either entirely exposed to a client or not accessible by that
+client at all. Starting with v1.10, Antrea introduces support for layer 7 NetworkPolicy, an application-aware policy
+which provides fine-grained control over the network traffic beyond IP, transport protocol, and port. It enables users
+to protect their applications by specifying how they are allowed to communicate with others, taking into account
+application context. For example, you can enforce policies to:
+
+- Grant access to privileged URLs to specific clients while making other URLs publicly accessible.
+- Prevent applications from accessing unauthorized domains.
+- Block network traffic using an unauthorized application protocol regardless of port used.
+
+This guide demonstrates how to configure layer 7 NetworkPolicy.
+
+## Prerequisites
+
+Layer 7 NetworkPolicy was introduced in v1.10 as an alpha feature and is disabled by default. A feature gate,
+`L7NetworkPolicy`, must be enabled in antrea-controller.conf and antrea-agent.conf in the `antrea-config` ConfigMap.
+Additionally, due to a constraint of the application detection engine, TX checksum offloading must be disabled via the
+`disableTXChecksumOffload` option in antrea-agent.conf for the feature to work. An example configuration is shown below:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ disableTXChecksumOffload: true
+ featureGates:
+ L7NetworkPolicy: true
+ antrea-controller.conf: |
+ featureGates:
+ L7NetworkPolicy: true
+```
+
+Alternatively, you can use the following helm installation command to configure the above options:
+
+```bash
+helm install antrea antrea/antrea --namespace kube-system --set featureGates.L7NetworkPolicy=true,disableTXChecksumOffload=true
+```
+
+## Usage
+
+There isn't a separate resource type for layer 7 NetworkPolicy. It is a kind of Antrea-native policy, which has the
+`l7Protocols` field specified in its rules. Like layer 3 and layer 4 policies, the `l7Protocols` field can be specified
+for ingress and egress rules in Antrea ClusterNetworkPolicy and Antrea NetworkPolicy. It can be used with the `from` or
+`to` field to select the network peer, and with `ports` to select the transport protocol and/or port to which the layer
+7 rule applies. The `action` of a layer 7 rule can only be `Allow`.
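+
+For instance, the sketch below combines `ports` and `l7Protocols` in a single
+rule, so that only TCP port 8080 traffic is evaluated against the HTTP criteria
+(the policy name, port, and path are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+  name: allow-http-on-8080    # hypothetical name
+spec:
+  priority: 5
+  tier: application
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: web
+  ingress:
+    - name: allow-http-8080   # Allow HTTP requests to "/api/*" on TCP port 8080 only.
+      action: Allow
+      ports:
+        - protocol: TCP
+          port: 8080
+      l7Protocols:
+        - http:
+            path: "/api/*"
+```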
+
+**Note**: Any traffic matching the layer 3/4 criteria (specified by `from`, `to`, and `ports`) of a layer 7 rule will be
+forwarded to an application-aware engine for protocol detection and rule enforcement. The traffic will be allowed if the
+layer 7 criteria are also matched, and dropped otherwise. Therefore, any rules after a layer 7 rule will not be enforced
+for traffic that matches the layer 7 rule's layer 3/4 criteria.
+
+As of now, the supported layer 7 protocols are HTTP and TLS. Support for more protocols may be added in the future and
+we welcome feature requests for protocols that you are interested in.
+
+### HTTP
+
+An example layer 7 NetworkPolicy for the HTTP protocol is shown below:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: ingress-allow-http-request-to-api-v2
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: allow-http # Allow inbound HTTP GET requests to "/api/v2" from Pods with label "app=client".
+ action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered.
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ l7Protocols:
+ - http:
+ path: "/api/v2/*"
+ host: "foo.bar.com"
+ method: "GET"
+ - name: drop-other # Drop all other inbound traffic (i.e., from Pods without label "app=client" or from external clients).
+ action: Drop
+```
+
+**path**: The `path` field represents the URI path to match. Both exact matches and wildcards are supported, e.g.
+`/api/v2/*`, `*/v2/*`, `/index.html`. If not set, the rule matches all URI paths.
+
+**host**: The `host` field represents the hostname present in the URI or the HTTP Host header to match. It does not
+contain the port associated with the host. Both exact matches and wildcards are supported, e.g. `*.foo.com`, `*.foo.*`,
+`foo.bar.com`. If not set, the rule matches all hostnames.
+
+**method**: The `method` field represents the HTTP method to match. It could be GET, POST, PUT, HEAD, DELETE, TRACE,
+OPTIONS, CONNECT and PATCH. If not set, the rule matches all methods.
+
+#### More examples
+
+The following NetworkPolicy grants access to privileged URLs to specific clients while making other URLs publicly
+accessible:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: allow-privileged-url-to-admin-role
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: for-admin # Allow inbound HTTP GET requests to "/admin" and "/public" from Pods with label "role=admin".
+ action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ role: admin
+ l7Protocols:
+ - http:
+ path: "/admin/*"
+ - http:
+ path: "/public/*"
+ - name: for-public # Allow inbound HTTP GET requests to "/public" from everyone.
+ action: Allow # All other inbound traffic will be automatically dropped.
+ l7Protocols:
+ - http:
+ path: "/public/*"
+```
+
+The following NetworkPolicy prevents applications from accessing unauthorized domains:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: allow-web-access-to-internal-domain
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ egress-restriction: internal-domain-only
+ egress:
+ - name: allow-dns # Allow outbound DNS requests.
+ action: Allow
+ ports:
+ - protocol: TCP
+ port: 53
+ - protocol: UDP
+ port: 53
+ - name: allow-http-only # Allow outbound HTTP requests towards "*.bar.com".
+ action: Allow # As the rule's "to" and "ports" are empty, which means it selects traffic to any network
+ l7Protocols: # peer's any port using any transport protocol, all outbound HTTP requests towards other
+ - http: # domains and non-HTTP requests will be automatically dropped, and subsequent rules will
+ host: "*.bar.com" # not be considered.
+```
+
+The following NetworkPolicy blocks network traffic using an unauthorized application protocol regardless of the port used.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: allow-http-only
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: http-only # Allow inbound HTTP requests only.
+ action: Allow # As the rule's "from" and "ports" are empty, which means it selects traffic from any network
+ l7Protocols: # peer to any port of the Pods this policy applies to, all inbound non-HTTP requests will be
+ - http: {} # automatically dropped, and subsequent rules will not be considered.
+```
+
+### TLS
+
+An example layer 7 NetworkPolicy for the TLS protocol is shown below:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: ingress-allow-tls-handshake
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: allow-tls # Allow inbound TLS/SSL handshake packets to server name "foo.bar.com" from Pods with label "app=client".
+ action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered.
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ l7Protocols:
+ - tls:
+ sni: "foo.bar.com"
+ - name: drop-other # Drop all other inbound traffic (i.e., from Pods without label "app=client" or from external clients).
+ action: Drop
+```
+
+**sni**: The `sni` field matches the TLS/SSL Server Name Indication (SNI) field in the TLS/SSL handshake process. Both
+exact matches and wildcards are supported, e.g. `*.foo.com`, `*.foo.*`, `foo.bar.com`. If not set, the rule matches all names.
+
+#### More examples
+
+The following NetworkPolicy prevents applications from accessing unauthorized SSL/TLS server names:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: allow-tls-handshake-to-internal
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ egress-restriction: internal-tls-only
+ egress:
+ - name: allow-dns # Allow outbound DNS requests.
+ action: Allow
+ ports:
+ - protocol: TCP
+ port: 53
+ - protocol: UDP
+ port: 53
+ - name: allow-tls-only # Allow outbound SSL/TLS handshake packets towards "*.bar.com".
+ action: Allow # As the rule's "to" and "ports" are empty, which means it selects traffic to any network
+ l7Protocols: # peer's any port of any transport protocol, all outbound SSL/TLS handshake packets towards
+ - tls: # other server names and non-SSL/non-TLS handshake packets will be automatically dropped,
+ sni: "*.bar.com" # and subsequent rules will not be considered.
+```
+
+The following NetworkPolicy blocks network traffic using an unauthorized application protocol regardless of the port used.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: allow-tls-only
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: tls-only # Allow inbound SSL/TLS handshake packets only.
+ action: Allow # As the rule's "from" and "ports" are empty, which means it selects traffic from any network
+ l7Protocols: # peer to any port of the Pods this policy applies to, all inbound non-SSL/non-TLS handshake
+ - tls: {} # packets will be automatically dropped, and subsequent rules will not be considered.
+```
+
+### Logs
+
+Layer 7 traffic that matches the NetworkPolicy will be logged in an
+event-triggered log file (`/var/log/antrea/networkpolicy/l7engine/eve-YEAR-MONTH-DAY.json`).
+Logs are categorized by **event_type**. The event type for allowed traffic is `http`;
+for dropped traffic it is `alert`. If `enableLogging` is set for the rule, dropped
+packets that match the rule will also be logged, with event type `packet`, in
+addition to the alert event. Below are examples for the allow, drop, and packet scenarios.
+
+Allow ingress from the client Pod (10.10.1.9) to path `/public/*` on the web Pod (10.10.1.10).
+
+```json
+{
+ "timestamp": "2024-08-26T22:37:30.895673+0000",
+ "flow_id": 742847661553363,
+ "in_iface": "antrea-l7-tap0",
+ "event_type": "http",
+ "vlan": [
+ 2
+ ],
+ "src_ip": "10.10.1.9",
+ "src_port": 55822,
+ "dest_ip": "10.10.1.10",
+ "dest_port": 80,
+ "proto": "TCP",
+ "pkt_src": "wire/pcap",
+ "tenant_id": 2,
+ "tx_id": 0,
+ "http": {
+ "hostname": "10.10.1.10",
+ "url": "/public/index.html",
+ "http_user_agent": "curl/7.81.0",
+ "http_content_type": "text/html",
+ "http_method": "GET",
+ "protocol": "HTTP/1.1",
+ "status": 200,
+ "length": 0
+ }
+}
+```
+
+Deny ingress from the client Pod (10.10.1.4) to path `/admin/*` on the web Pod (10.10.1.3).
+
+```json
+{
+ "timestamp": "2024-09-05T22:49:24.788756+0000",
+ "flow_id": 1131530446896560,
+ "in_iface": "antrea-l7-tap0",
+ "event_type": "alert",
+ "vlan": [
+ 2
+ ],
+ "src_ip": "10.10.1.4",
+ "src_port": 45034,
+ "dest_ip": "10.10.1.3",
+ "dest_port": 80,
+ "proto": "TCP",
+ "pkt_src": "wire/pcap",
+ "tenant_id": 2,
+ "alert": {
+ "action": "blocked",
+ "gid": 1,
+ "signature_id": 1,
+ "rev": 0,
+ "signature": "Reject by AntreaNetworkPolicy:default/allow-privileged-url-to-admin-role",
+ "category": "",
+ "severity": 3,
+ "tenant_id": 2
+ },
+ "app_proto": "http",
+ "direction": "to_server",
+ "flow": {
+ "pkts_toserver": 3,
+ "pkts_toclient": 1,
+ "bytes_toserver": 307,
+ "bytes_toclient": 78,
+ "start": "2024-09-05T22:49:24.787742+0000",
+ "src_ip": "10.10.1.4",
+ "dest_ip": "10.10.1.3",
+ "src_port": 45034,
+ "dest_port": 80
+ }
+}
+```
+
+Additional packet logs are available when `enableLogging` is set: Suricata tracks and
+logs all packets matching the destination IP address of the packet that generated the alert.
+
+```json
+{
+ "timestamp": "2024-09-05T22:49:24.788756+0000",
+ "flow_id": 1131530446896560,
+ "in_iface": "antrea-l7-tap0",
+ "event_type": "packet",
+ "vlan": [
+ 2
+ ],
+ "src_ip": "10.10.1.4",
+ "src_port": 45034,
+ "dest_ip": "10.10.1.3",
+ "dest_port": 80,
+ "proto": "TCP",
+ "pkt_src": "wire/pcap",
+ "tenant_id": 2,
+ "packet": "dtwWezuaHlOhfWpNgQAAAggARQAAjU/0QABABtRcCgoBBAoKAQOv6gBQgOZTvPTauPuAGAH7TZcAAAEBCAouFZzsR8fBM0dFVCAvYWRtaW4vaW5kZXguaHRtbCBIVFRQLzEuMQ0KSG9zdDogMTAuMTAuMS4zDQpVc2VyLUFnZW50OiBjdXJsLzcuNzQuMA0KQWNjZXB0OiAqLyoNCg0K",
+ "packet_info": {
+ "linktype": 1
+ }
+}
+```
+
+## Limitations
+
+This feature is currently only supported for Nodes running Linux.
diff --git a/content/docs/v2.2.0-alpha.2/docs/antrea-network-policy.md b/content/docs/v2.2.0-alpha.2/docs/antrea-network-policy.md
new file mode 100644
index 00000000..c2bc7f38
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/antrea-network-policy.md
@@ -0,0 +1,2054 @@
+# Antrea Network Policy CRDs
+
+## Table of Contents
+
+
+- [Summary](#summary)
+- [Tier](#tier)
+ - [Tier CRDs](#tier-crds)
+ - [Static tiers](#static-tiers)
+ - [kubectl commands for Tier](#kubectl-commands-for-tier)
+- [Antrea ClusterNetworkPolicy](#antrea-clusternetworkpolicy)
+ - [The Antrea ClusterNetworkPolicy resource](#the-antrea-clusternetworkpolicy-resource)
+ - [ACNP with stand-alone selectors](#acnp-with-stand-alone-selectors)
+ - [ACNP with ClusterGroup reference](#acnp-with-clustergroup-reference)
+ - [ACNP for complete Pod isolation in selected Namespaces](#acnp-for-complete-pod-isolation-in-selected-namespaces)
+ - [ACNP for strict Namespace isolation](#acnp-for-strict-namespace-isolation)
+ - [ACNP for default zero-trust cluster security posture](#acnp-for-default-zero-trust-cluster-security-posture)
+ - [ACNP for toServices rule](#acnp-for-toservices-rule)
+ - [ACNP for ICMP traffic](#acnp-for-icmp-traffic)
+ - [ACNP for IGMP traffic](#acnp-for-igmp-traffic)
+ - [ACNP for multicast egress traffic](#acnp-for-multicast-egress-traffic)
+ - [ACNP for HTTP traffic](#acnp-for-http-traffic)
+ - [ACNP for Kubernetes Node traffic](#acnp-for-kubernetes-node-traffic)
+ - [ACNP with log settings](#acnp-with-log-settings)
+ - [Behavior of to and from selectors](#behavior-of-to-and-from-selectors)
+ - [Key differences from K8s NetworkPolicy](#key-differences-from-k8s-networkpolicy)
+ - [kubectl commands for Antrea ClusterNetworkPolicy](#kubectl-commands-for-antrea-clusternetworkpolicy)
+- [Antrea NetworkPolicy](#antrea-networkpolicy)
+ - [The Antrea NetworkPolicy resource](#the-antrea-networkpolicy-resource)
+ - [Key differences from Antrea ClusterNetworkPolicy](#key-differences-from-antrea-clusternetworkpolicy)
+ - [Antrea NetworkPolicy with Group reference](#antrea-networkpolicy-with-group-reference)
+ - [kubectl commands for Antrea NetworkPolicy](#kubectl-commands-for-antrea-networkpolicy)
+- [Antrea-native Policy ordering based on priorities](#antrea-native-policy-ordering-based-on-priorities)
+ - [Ordering based on Tier priority](#ordering-based-on-tier-priority)
+ - [Ordering based on policy priority](#ordering-based-on-policy-priority)
+ - [Rule enforcement based on priorities](#rule-enforcement-based-on-priorities)
+- [Advanced peer selection mechanisms of Antrea-native Policies](#advanced-peer-selection-mechanisms-of-antrea-native-policies)
+ - [Selecting Namespace by Name](#selecting-namespace-by-name)
+ - [K8s clusters with version 1.21 and above](#k8s-clusters-with-version-121-and-above)
+ - [K8s clusters with version 1.20 and below](#k8s-clusters-with-version-120-and-below)
+ - [Selecting Pods in the same Namespace with Self](#selecting-pods-in-the-same-namespace-with-self)
+ - [Selecting Namespaces with the same label values using SameLabels](#selecting-namespaces-with-the-same-label-values-using-samelabels)
+ - [FQDN based filtering](#fqdn-based-filtering)
+ - [Node Selector](#node-selector)
+ - [toServices egress rules](#toservices-egress-rules)
+ - [ServiceAccount based selection](#serviceaccount-based-selection)
+ - [Apply to NodePort Service](#apply-to-nodeport-service)
+- [ClusterGroup](#clustergroup)
+ - [ClusterGroup CRD](#clustergroup-crd)
+ - [kubectl commands for ClusterGroup](#kubectl-commands-for-clustergroup)
+- [Group](#group)
+ - [Group CRD](#group-crd)
+ - [Restrictions and Key differences from ClusterGroup](#restrictions-and-key-differences-from-clustergroup)
+ - [kubectl commands for Group](#kubectl-commands-for-group)
+- [RBAC](#rbac)
+- [Notes and constraints](#notes-and-constraints)
+ - [Limitations of Antrea policy logging](#limitations-of-antrea-policy-logging)
+ - [Logging prior to Antrea v1.13](#logging-prior-to-antrea-v113)
+
+
+## Summary
+
+Antrea supports standard K8s NetworkPolicies to secure ingress/egress traffic for
+Pods. These NetworkPolicies are written from an application developer's perspective,
+hence they lack the finer-grained control over security policies that a cluster
+administrator would require. This document describes a few new CRDs supported by
+Antrea to provide the administrator with more control over security within the
+cluster; they are meant to co-exist with and complement the K8s NetworkPolicy.
+
+Starting with Antrea v1.0, Antrea-native policies are enabled by default, which
+means that no additional configuration is required in order to use the
+Antrea-native policy CRDs.
+
+## Tier
+
+Antrea supports grouping Antrea-native policy CRDs together in a tiered fashion
+to provide a hierarchy of security policies. This is achieved by setting the
+`tier` field when defining an Antrea-native policy CRD (e.g. an Antrea
+ClusterNetworkPolicy object) to the appropriate Tier name. Each Tier has a
+priority associated with it, which determines its relative order among other Tiers.
+
+**Note**: K8s NetworkPolicies will be enforced once all policies in all Tiers (except
+for the baseline Tier) have been enforced. For more information, refer to the
+[Static tiers section](#static-tiers) below.
+
+### Tier CRDs
+
+Creating Tiers as CRDs gives users the flexibility to create and delete Tiers
+as they prefer, i.e. they are not bound to the 5 static tiering options that
+existed initially.
+
+An example Tier might look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Tier
+metadata:
+ name: mytier
+spec:
+ priority: 10
+ description: "my custom tier"
+```
+
+Tiers have the following characteristics:
+
+- Policies can associate themselves with an existing Tier by setting the `tier`
+  field in an Antrea NetworkPolicy CRD spec to the Tier's name (see the sketch
+  after this list).
+- A Tier must exist before an Antrea-native policy can reference it.
+- Policies associated with higher ordered (low `priority` value) Tiers are
+  enforced first.
+- No two Tiers can be created with the same priority.
+- Updating the Tier's `priority` field is unsupported.
+- Deleting a Tier that is still referenced by existing policies is not allowed.
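+
+For example, below is a minimal sketch of a policy associated with the custom
+`mytier` defined above (the policy name and selector are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+  name: policy-in-mytier     # hypothetical name
+spec:
+  priority: 5
+  tier: mytier               # must be the name of an existing Tier
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: web
+  ingress:
+    - action: Drop
+      name: drop-all-ingress
+```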
+
+### Static tiers
+
+On startup, antrea-controller will create 5 static, read-only Tier CRD resources
+corresponding to the static tiers for default consumption, as well as a "baseline"
+Tier CRD object, which will be enforced after developer-created K8s NetworkPolicies.
+The details for these Tiers are shown below:
+
+```text
+ Emergency -> Tier name "emergency" with priority "50"
+ SecurityOps -> Tier name "securityops" with priority "100"
+ NetworkOps -> Tier name "networkops" with priority "150"
+ Platform -> Tier name "platform" with priority "200"
+ Application -> Tier name "application" with priority "250"
+ Baseline -> Tier name "baseline" with priority "253"
+```
+
+Any Antrea-native policy CRD referencing a static tier in its spec will now internally
+reference the corresponding Tier resource, thus maintaining the order of enforcement.
+
+The static Tier CRD Resources are created as follows in the relative order of
+precedence compared to K8s NetworkPolicies:
+
+```text
+ Emergency > SecurityOps > NetworkOps > Platform > Application > K8s NetworkPolicy > Baseline
+```
+
+Thus, all Antrea-native Policy resources associated with the "emergency" Tier will be
+enforced before any Antrea-native Policy resource associated with any other
+Tiers, until a match occurs, in which case the policy rule's `action` will be
+applied. **Any Antrea-native Policy resource without a `tier` name set in its spec
+will be associated with the "application" Tier.** Policies associated with the first
+5 static, read-only Tiers, as well as with all the custom Tiers created with a priority
+value lower than 250 (priority values greater than or equal to 250 are not allowed
+for custom Tiers), will be enforced before K8s NetworkPolicies.
+
+Policies created in the "baseline" Tier, on the other hand, will have lower precedence
+than developer-created K8s NetworkPolicies, which comes in handy when administrators
+want to enforce baseline policies like "default-deny inter-namespace traffic" for some
+specific Namespace, while still allowing individual developers to lift the restriction
+if needed using K8s NetworkPolicies.
+
+Note that baseline policies cannot counteract the isolated Pod behavior provided by
+K8s NetworkPolicies. To read more about this Pod isolation behavior, refer to [this
+document](https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-two-sorts-of-pod-isolation).
+If a Pod becomes isolated because a K8s NetworkPolicy is applied to it, and the policy
+does not explicitly allow communications with another Pod, this behavior cannot be changed
+by creating an Antrea-native policy with an "allow" action in the "baseline" Tier.
+For this reason, it generally does not make sense to create policies in the "baseline"
+Tier with the "allow" action.
+
+### *kubectl* commands for Tier
+
+The following `kubectl` commands can be used to retrieve Tier resources:
+
+```bash
+ # Use long name
+ kubectl get tiers
+
+ # Use long name with API Group
+ kubectl get tiers.crd.antrea.io
+
+ # Use short name
+ kubectl get tr
+
+ # Use short name with API Group
+ kubectl get tr.crd.antrea.io
+
+ # Sort output by Tier priority
+ kubectl get tiers --sort-by=.spec.priority
+```
+
+All the above commands produce output similar to what is shown below:
+
+```text
+ NAME PRIORITY AGE
+ emergency 50 27h
+ securityops 100 27h
+ networkops 150 27h
+ platform 200 27h
+ application 250 27h
+```
+
+## Antrea ClusterNetworkPolicy
+
+Antrea ClusterNetworkPolicy (ACNP), one of the two Antrea-native policy CRDs
+introduced, is a specification of how workloads within a cluster communicate
+with each other and other external endpoints. The ClusterNetworkPolicy is
+meant to aid cluster admins in configuring the security policy for the
+cluster, unlike K8s NetworkPolicy, which is aimed at developers securing
+their apps and affects Pods within the Namespace in which the K8s NetworkPolicy
+is created. Rules belonging to ClusterNetworkPolicies are enforced before any
+rule belonging to a K8s NetworkPolicy.
+
+### The Antrea ClusterNetworkPolicy resource
+
+Example ClusterNetworkPolicies might look like these:
+
+#### ACNP with stand-alone selectors
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-stand-alone-selectors
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: db
+ - namespaceSelector:
+ matchLabels:
+ env: prod
+ ingress:
+ - action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ role: frontend
+ - podSelector:
+ matchLabels:
+ role: nondb
+ namespaceSelector:
+ matchLabels:
+ role: db
+ ports:
+ - protocol: TCP
+ port: 8080
+ endPort: 9000
+ - protocol: TCP
+ port: 6379
+ name: AllowFromFrontend
+ egress:
+ - action: Drop
+ to:
+ - ipBlock:
+ cidr: 10.0.10.0/24
+ ports:
+ - protocol: TCP
+ port: 5978
+ name: DropToThirdParty
+```
+
+#### ACNP with ClusterGroup reference
+
+Refer to the [ClusterGroup section](#clustergroup) for more information regarding the ClusterGroup resource.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-cluster-groups
+spec:
+ priority: 8
+ tier: securityops
+ appliedTo:
+ - group: "test-cg-with-db-selector" # defined separately with a ClusterGroup resource
+ ingress:
+ - action: Allow
+ from:
+ - group: "test-cg-with-frontend-selector" # defined separately with a ClusterGroup resource
+ ports:
+ - protocol: TCP
+ port: 8080
+ endPort: 9000
+ - protocol: TCP
+ port: 6379
+ name: AllowFromFrontend
+ egress:
+ - action: Drop
+ to:
+ - group: "test-cg-with-ip-block" # defined separately with a ClusterGroup resource
+ ports:
+ - protocol: TCP
+ port: 5978
+ name: DropToThirdParty
+```
+
+#### ACNP for complete Pod isolation in selected Namespaces
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: isolate-all-pods-in-namespace
+spec:
+ priority: 1
+ tier: securityops
+ appliedTo:
+ - namespaceSelector:
+ matchLabels:
+ app: no-network-access-required
+ ingress:
+ - action: Drop # For all Pods in those Namespaces, drop and log all ingress traffic from anywhere
+ name: drop-all-ingress
+ egress:
+ - action: Drop # For all Pods in those Namespaces, drop and log all egress traffic towards anywhere
+ name: drop-all-egress
+```
+
+#### ACNP for strict Namespace isolation
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: strict-ns-isolation
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - namespaceSelector: # Selects all non-system Namespaces in the cluster
+ matchExpressions:
+ - {key: kubernetes.io/metadata.name, operator: NotIn, values: [kube-system]}
+ ingress:
+ - action: Pass
+ from:
+ - namespaces:
+ match: Self # Skip ACNP evaluation for traffic from Pods in the same Namespace
+ name: PassFromSameNS
+ - action: Drop
+ from:
+ - namespaceSelector: {} # Drop from Pods from all other Namespaces
+ name: DropFromAllOtherNS
+ egress:
+ - action: Pass
+ to:
+ - namespaces:
+ match: Self # Skip ACNP evaluation for traffic to Pods in the same Namespace
+ name: PassToSameNS
+ - action: Drop
+ to:
+ - namespaceSelector: {} # Drop to Pods from all other Namespaces
+ name: DropToAllOtherNS
+```
+
+#### ACNP for default zero-trust cluster security posture
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: default-cluster-deny
+spec:
+ priority: 1
+ tier: baseline
+ appliedTo:
+ - namespaceSelector: {} # Selects all Namespaces in the cluster
+ ingress:
+ - action: Drop
+ egress:
+ - action: Drop
+```
+
+#### ACNP for toServices rule
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-drop-to-services
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: client
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ egress:
+ - action: Drop
+ toServices:
+ - name: svcName
+ namespace: svcNamespace
+ name: DropToServices
+```
+
+#### ACNP for ICMP traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-reject-ping-request
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: server
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ egress:
+ - action: Reject
+ protocols:
+ - icmp:
+ icmpType: 8
+ icmpCode: 0
+ name: DropPingRequest
+```
+
+#### ACNP for IGMP traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-igmp-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: mcjoin6
+ ingress:
+ - action: Drop
+ protocols:
+ - igmp:
+ igmpType: 0x11
+ groupAddress: 224.0.0.1
+ name: dropIGMPQuery
+ egress:
+ - action: Drop
+ protocols:
+ - igmp:
+ igmpType: 0x16
+ groupAddress: 225.1.2.3
+ name: dropIGMPReport
+```
+
+#### ACNP for multicast egress traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-multicast-traffic-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: mcjoin6
+ egress:
+ - action: Drop
+ to:
+ - ipBlock:
+ cidr: 225.1.2.3/32
+ name: dropMcastUDPTraffic
+```
+
+#### ACNP for HTTP traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: ingress-allow-http-request-to-api-v2
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: allow-http # Allow inbound HTTP GET requests to "/api/v2" from Pods with app=client label.
+ action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered.
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ l7Protocols:
+ - http:
+ path: "/api/v2/*"
+ host: "foo.bar.com"
+ method: "GET"
+ - name: drop-other # Drop all other inbound traffic (i.e., from Pods without the app=client label or from external clients).
+ action: Drop
+```
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: allow-web-access-to-internal-domain
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ egress-restriction: internal-domain-only
+ egress:
+ - name: allow-dns # Allow outbound DNS requests.
+ action: Allow
+ ports:
+ - protocol: TCP
+ port: 53
+ - protocol: UDP
+ port: 53
+ - name: allow-http-only # Allow outbound HTTP requests towards foo.bar.com.
+ action: Allow # As the rule's "to" and "ports" are empty, which means it selects traffic to any network
+ l7Protocols: # peer's any port using any transport protocol, all outbound HTTP requests towards other
+ - http: # domains and non-HTTP requests will be automatically dropped, and subsequent rules will
+ host: "*.bar.com" # not be considered.
+```
+
+Please refer to [Antrea Layer 7 NetworkPolicy](antrea-l7-network-policy.md) for extra information.
+
+#### ACNP for Kubernetes Node traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-node-egress-traffic-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - nodeSelector:
+ matchLabels:
+ kubernetes.io/os: linux
+ egress:
+ - action: Drop
+ to:
+ - ipBlock:
+ cidr: 192.168.1.0/24
+ ports:
+ - protocol: TCP
+ port: 80
+ name: dropHTTPTrafficToCIDR
+```
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-node-ingress-traffic-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - nodeSelector:
+ matchLabels:
+ kubernetes.io/os: linux
+ ingress:
+ - action: Drop
+ from:
+ - ipBlock:
+ cidr: 192.168.1.0/24
+ ports:
+ - protocol: TCP
+ port: 22
+ name: dropSSHTrafficFromCIDR
+```
+
+Please refer to [Antrea Node NetworkPolicy](antrea-node-network-policy.md) for more information.
+
+#### ACNP with log settings
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-log-setting
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: db
+ - namespaceSelector:
+ matchLabels:
+ env: prod
+ ingress:
+ - action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ role: frontend
+ namespaceSelector:
+ matchLabels:
+ role: db
+ name: AllowFromFrontend
+ enableLogging: true
+ logLabel: "frontend-allowed"
+```
+
+**spec**: The ClusterNetworkPolicy `spec` has all the information needed to
+define a cluster-wide security policy.
+
+**appliedTo**: The `appliedTo` field at the policy level specifies the
+grouping criteria of Pods to which the policy applies. Pods can be
+selected cluster-wide using `podSelector`. If set with a `namespaceSelector`,
+all Pods from Namespaces selected by the namespaceSelector will be selected.
+Specific Pods from specific Namespaces can be selected by providing both a
+`podSelector` and a `namespaceSelector` in the same `appliedTo` entry.
+The `appliedTo` field can also reference a ClusterGroup resource by setting
+the ClusterGroup's name in the `group` field in place of the stand-alone selectors.
+The `appliedTo` field can also reference a Service by setting the Service's name
+and Namespace in the `service` field in place of the stand-alone selectors. Only a
+NodePort Service can be referenced by this field. More details can be found in the
+[ApplyToNodePortService](#apply-to-nodeport-service) section.
+IPBlock cannot be set in the `appliedTo` field.
+An IPBlock ClusterGroup referenced in an `appliedTo` field will be ignored,
+and the policy will have no effect.
+The policy-level `appliedTo` field must not be set if per-rule `appliedTo`
+is used.
+
+In the [first example](#acnp-with-stand-alone-selectors), the policy applies to Pods which either match the labels
+"role=db" in all the Namespaces, or are from Namespaces which match the
+labels "env=prod".
+The [second example](#acnp-with-clustergroup-reference) policy applies to all network endpoints selected by the
+"test-cg-with-db-selector" ClusterGroup.
+The [third example](#acnp-for-complete-pod-isolation-in-selected-namespaces) policy applies to all Pods in the
+Namespaces that match the label "app=no-network-access-required".
+`appliedTo` also supports ServiceAccount based selection, which allows users to select Pods by ServiceAccount.
+More details can be found in the [ServiceAccountSelector](#serviceaccount-based-selection) section.
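+
+For reference, below is a minimal sketch of an `appliedTo` entry referencing a
+NodePort Service (the Service name and Namespace are hypothetical):
+
+```yaml
+appliedTo:
+  - service:
+      name: my-nodeport-svc    # hypothetical NodePort Service
+      namespace: default
+```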
+
+**priority**: The `priority` field determines the relative priority of the
+policy among all ClusterNetworkPolicies in the given cluster. This field is
+mandatory. A lower priority value indicates higher precedence. Priority values
+can range from 1.0 to 10000.0.
+**Note**: Policies with the same priorities will be enforced
+indeterministically. Users should therefore take care to use priorities to
+ensure the behavior they expect.
+
+**tier**: The `tier` field associates an ACNP with an existing Tier. It can be
+set to the name of the Tier CRD with which this policy should be associated.
+If not set, the ACNP is associated with the lowest priority default Tier, i.e.
+the "application" Tier.
+
+**action**: Each ingress or egress rule of a ClusterNetworkPolicy must have the
+`action` field set. As of now, the available actions are ["Allow", "Drop", "Reject", "Pass"].
+When the rule action is "Allow" or "Drop", Antrea will allow or drop traffic which
+matches the `from`/`to`, `ports` and `protocols` sections of that rule, provided that
+the traffic does not match a higher precedence rule in the cluster (ACNP rules created
+in higher order Tiers, or policy instances in the same Tier with a lower priority number).
+If a "Reject" rule is matched, the client initiating the traffic will receive an `ICMP
+host administratively prohibited` message for ICMP, UDP and SCTP requests, or an explicit
+reject response for TCP requests, instead of a timeout. A "Pass" rule, on the other hand, skips this packet
+for further Antrea-native policy rule evaluations in regular Tiers, and delegates
+the decision to K8s namespaced NetworkPolicies (in networking.k8s.io API group).
+All ACNP/ANNP rules that have lower priority than the current "Pass" rule will be
+skipped (except for the Baseline Tier rules). If no K8s NetworkPolicy matches this
+traffic, then all Antrea-native policy Baseline Tier rules will be tested for a match.
+Note that the "Pass" action does not make sense when configured in Baseline Tier
+ACNP rules, and such configurations will be rejected by the admission controller.
+Also, "Pass" and "Reject" actions are not supported for rules applied to multicast
+traffic.
+
+**ingress**: Each ClusterNetworkPolicy may consist of an ordered set of zero or more
+ingress rules. Under `ports`, the optional field `endPort` can only be set when a
+numerical `port` is set, to represent a range of ports from `port` to `endPort` inclusive.
+`protocols` defines additional protocols that are not supported by `ports`.
+Currently, only the ICMP and IGMP protocols can be set under `protocols`. For the `ICMP`
+protocol, `icmpType` and `icmpCode` can be used to specify the ICMP traffic that
+this rule matches. For the `IGMP` protocol, `igmpType` and `groupAddress` can be
+used to specify the IGMP traffic that this rule matches. Currently, only the IGMP
+query is supported in ingress rules; other IGMP types and multicast data traffic
+are not supported for ingress rules. The valid `igmpType` is:
+
+message type | value
+-- | --
+Membership Query | 0x11
+
+The group address in IGMP query packets can only be 224.0.0.1. Group-Specific
+IGMP queries, which encode the target group in the IGMP message, are not supported
+yet because OVS cannot recognize the address. The `IGMP` protocol cannot be used with
+`ICMP` or with properties like `from`, `to`, `ports` and `toServices`.
+
+Also, each rule has an optional `name` field, which should be unique within
+the policy and describe the intention of the rule. If `name` is not provided for
+a rule, it will be auto-generated by Antrea. The auto-generated name is
+of the format `[ingress/egress]-[action]-[uid]`, e.g. ingress-allow-2f0ed6e,
+where [uid] is the first 7 characters of the rule's SHA-1 hash value.
+If a policy contains duplicate rules, or if a rule name is the same as the auto-generated
+name of some other rule in the same policy, it will cause a conflict,
+and the policy will be rejected.
+A ClusterGroup name can be set in the `group` field of an ingress `from` section in place
+of stand-alone selectors to allow traffic from workloads/ipBlocks set in the ClusterGroup.
+
+The [first example](#acnp-with-stand-alone-selectors) policy contains a single rule, which allows matched traffic on
+TCP ports 8080 through 9000 and port 6379, from one of two sources: the first specified by a `podSelector`
+and the second specified by a combination of a `podSelector` and a
+`namespaceSelector`.
+The [second example](#acnp-with-clustergroup-reference) policy contains a single rule, which allows matched traffic on
+multiple TCP ports (8080 through 9000 inclusive, plus 6379) from all network endpoints
+selected by the "test-cg-with-frontend-selector" ClusterGroup.
+The [third example](#acnp-for-complete-pod-isolation-in-selected-namespaces) policy contains a single rule,
+which drops all ingress traffic towards any Pod in Namespaces that have label `app` set to
+`no-network-access-required`. Note that an empty `From` in the ingress rule means that
+this rule matches all ingress sources.
+Ingress `From` section also supports ServiceAccount based selection. This allows users to use ServiceAccount
+to select Pods. More details can be found in the [ServiceAccountSelector](#serviceaccount-based-selection) section.
+**Note**: The order in which the ingress rules are specified matters, i.e., rules will
+be enforced in the order in which they are written.
+
+**egress**: Each ClusterNetworkPolicy may consist of an ordered set of zero or more
+egress rules. Each rule, depending on its `action` field, allows
+or drops traffic which matches all of the `to` and `ports` sections.
+Under `ports`, the optional field `endPort` can only be set when a numerical `port`
+is set, to represent a range of ports from `port` to `endPort` inclusive.
+`protocols` defines additional protocols that are not supported by `ports`. Currently, only
+the ICMP and IGMP protocols can be set under `protocols`. For the `ICMP` protocol, `icmpType`
+and `icmpCode` can be used to specify the ICMP traffic that this rule matches. For
+the `IGMP` protocol, `igmpType` and `groupAddress` can be used to specify the IGMP
+traffic that this rule matches. If `igmpType` is not set, all reports will be matched.
+If `groupAddress` is empty, all multicast group addresses will be matched.
+Only IGMP reports are supported in egress rules. The `IGMP` protocol cannot be used with
+`ICMP` or with properties like `from`, `to`, `ports` and `toServices`. Valid `igmpType` values are:
+
+message type | value
+-- | --
+IGMPv1 Membership Report | 0x12
+IGMPv2 Membership Report | 0x16
+IGMPv3 Membership Report | 0x22
+
+Also, each rule has an optional `name` field, which should be unique within
+the policy and describe the intention of the rule. If `name` is not provided for
+a rule, it will be auto-generated by Antrea, following the same process as
+for ingress rules.
+A ClusterGroup name can be set in the `group` field of an egress `to` section in place
+of stand-alone selectors to allow traffic to workloads/ipBlocks set in the ClusterGroup.
+The `toServices` field contains a list of Service Namespace and Service name
+combinations, to match traffic destined to these Services.
+
+More details can be found in the [toServices](#toservices-egress-rules) section.
+The [first example](#acnp-with-stand-alone-selectors) policy contains a single rule, which drops matched traffic on a
+single port, to the 10.0.10.0/24 subnet specified by the `ipBlock` field.
+The [second example](#acnp-with-clustergroup-reference) policy contains a single rule, which drops matched traffic on
+TCP port 5978 to all network endpoints selected by the "test-cg-with-ip-block"
+ClusterGroup.
+The [third example](#acnp-for-complete-pod-isolation-in-selected-namespaces) policy contains a single rule,
+which drops all egress traffic initiated by any Pod in Namespaces that have `app` set to
+`no-network-access-required`.
+The [sixth example](#acnp-for-toservices-rule) policy contains a single rule,
+which drops traffic from Pods labeled "role: client" in Namespaces labeled "env: prod" to the Service svcNamespace/svcName
+via its ClusterIP.
+Note that an empty `to` together with an empty `toServices` in an egress rule means that
+the rule matches all egress destinations.
+Egress `To` section also supports FQDN based filtering. This can be applied to exact FQDNs or
+wildcard expressions. More details can be found in the [FQDN](#fqdn-based-filtering) section.
+Egress `To` section also supports ServiceAccount based selection. This allows users to use ServiceAccount
+to select Pods. More details can be found in the [ServiceAccountSelector](#serviceaccount-based-selection) section.
+**Note**: The order in which the egress rules are specified matters, i.e., rules will
+be enforced in the order in which they are written.
+
+**enableLogging** and **logLabel**: Antrea-native policy ingress or egress rules
+can be audited by setting its logging fields. When the `enableLogging` field is set
+to `true`, the first packet of any traffic flow that matches this rule will be
+logged to a file (`/var/log/antrea/networkpolicy/np.log`) on the Node on which the
+rule is enforced. The log files can then be used for further analysis. If `logLabel`
+is provided, the label will be added in the log. For example, in the
+[ACNP with log settings](#acnp-with-log-settings), traffic that hits the
+"AllowFromFrontend" rule will be logged with log label "frontend-allowed".
+
+The logging feature is best-effort, and as such there is no guarantee that all
+the flows which match the policy rule will be logged. Additionally, we do not
+recommend enabling policy logging for older Antrea versions (all versions prior
+to v1.12, as well as v1.12.0 and v1.12.1). See this [section](#limitations-of-antrea-policy-logging)
+for more information.
+
+For drop and reject rules, deduplication is applied to reduce duplicated
+log messages, and the duplication buffer length is set to 1 second. When a rule
+does not have a name, an identifiable name will be generated for the rule and
+added to the log. For rules in layer 7 NetworkPolicy, packets are logged with
+action `Redirect` prior to analysis by the layer 7 engine, and the layer 7 engine
+can log more information in its own logs.
+
+The rules are logged in the following format:
+
+```text
+<yyyy/mm/dd> <time> <ovs-table-name> <antrea-native-policy-reference> <rule-name> <direction> <action> <openflow-id> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length> <log-label>
+```
Rules is a list of rules to be applied to the selected GroupMembers.
+
+
+
+
+appliedToGroups
+
+[]string
+
+
+
+
AppliedToGroups is a list of names of AppliedToGroups to which this policy applies.
+Cannot be set in conjunction with any NetworkPolicyRule.AppliedToGroups in Rules.
+
+
+
+
+priority
+
+float64
+
+
+
+
Priority represents the relative priority of this Network Policy as compared to
+other Network Policies. Priority will be unset (nil) for K8s NetworkPolicy.
+
+
+
+
+tierPriority
+
+int32
+
+
+
+
TierPriority represents the priority of the Tier associated with this Network
+Policy. The TierPriority will remain nil for K8s NetworkPolicy.
ObjectMeta was omitted by mistake when this type was first defined, and was added later on.
+To ensure backwards-compatibility, we had to use Protobuf field number 3 when adding the
+field, as 1 was already taken by the request field. This is unusual, as K8s API types
+always use 1 as the Protobuf field number for the metadata field, and that’s also what we
+do for all other Antrea API types. It should only affect the wire format, and nothing else.
+When a new version of this API is introduced in the future (e.g., v1), we can correct
+this and assign 1 as the Protobuf field number for the metadata field.
+Refer to the Kubernetes API documentation for the fields of the
+metadata field.
+
HTTPProtocol matches HTTP requests with specific host, method, and path. All fields could be used alone or together.
+If all fields are not provided, it matches all HTTP requests.
+
+
+
+
+
Field
+
Description
+
+
+
+
+
+host
+
+string
+
+
+
+
Host represents the hostname present in the URI or the HTTP Host header to match.
+It does not contain the port associated with the host.
+
+
+
+
+method
+
+string
+
+
+
+
Method represents the HTTP method to match.
+It could be GET, POST, PUT, HEAD, DELETE, TRACE, OPTIONS, CONNECT and PATCH.
+
+
+
+
+path
+
+string
+
+
+
+
Path represents the URI path to match (Ex. “/index.html”, “/admin”).
Action specifies the action to be applied on the rule. i.e. Allow/Drop. An empty
+action “nil” defaults to Allow action, which would be the case for rules created for
+K8s Network Policy.
+
+
+
+
+enableLogging
+
+bool
+
+
+
+
EnableLogging indicates whether or not to generate logs when rules are matched. Default to false.
+
+
+
+
+appliedToGroups
+
+[]string
+
+
+
+
AppliedToGroups is a list of names of AppliedToGroups to which this rule applies.
+Cannot be set in conjunction with NetworkPolicy.AppliedToGroups of the NetworkPolicy
+that this Rule is referred to.
+
+
+
+
+name
+
+string
+
+
+
+
Name describes the intention of this rule.
+Name should be unique within the policy.
Port and EndPort can only be specified, when the Protocol is TCP, UDP, or SCTP.
+Port defines the port name or number on the given protocol. If not specified
+and the Protocol is TCP, UDP, or SCTP, this matches all port numbers.
+
+
+
+
+endPort
+
+int32
+
+
+
+(Optional)
+
EndPort defines the end of the port range, being the end included within the range.
+It can only be specified when a numerical port is specified.
+
+
+
+
+icmpType
+
+int32
+
+
+
+(Optional)
+
ICMPType and ICMPCode can only be specified, when the Protocol is ICMP. If they
+both are not specified and the Protocol is ICMP, this matches all ICMP traffic.
+
+
+
+
+icmpCode
+
+int32
+
+
+
+
+
+
+
+igmpType
+
+int32
+
+
+
+(Optional)
+
IGMPType and GroupAddress can only be specified when the Protocol is IGMP.
+
+
+
+
+groupAddress
+
+string
+
+
+
+
+
+
+
+srcPort
+
+int32
+
+
+
+(Optional)
+
SrcPort and SrcEndPort can only be specified, when the Protocol is TCP, UDP, or SCTP.
+It restricts the source port of the traffic.
NodeSelector selects Nodes to which the BGPPolicy is applied. If multiple BGPPolicies select a Node, only one
+will be effective and enforced; others serve as alternatives.
+
+
+
+
+localASN
+
+int32
+
+
+
+
LocalASN is the AS number used by the BGP process. The available private AS number range is 64512-65535.
+
+
+
+
+listenPort
+
+int32
+
+
+
+
ListenPort is the port on which the BGP process listens, and the default value is 179.
Only one network interface is supported now.
+Other interfaces except interfaces[0] will be ignored if there are more than one interfaces.
+
+
+
+
+
+
+
+
NodeLatencyMonitor
+
+
+
NodeLatencyMonitor is used to monitor the latency between nodes in a Kubernetes cluster. It is a singleton resource,
+meaning only one instance of it can exist in the cluster.
ExpirationMinutes is the requested duration of validity of the SupportBundleCollection.
+A SupportBundleCollection will be marked as Failed if it does not finish before expiration.
+Default is 60.
+
+
+
+
+sinceTime
+
+string
+
+
+
+
SinceTime specifies a relative time before the current time from which to collect logs
+A valid value is like: 1d, 2h, 30m.
Pod specifies how to advertise Pod IPs. Currently, if this is set, NodeIPAM Pod CIDR instead of specific Pods IPs
+will be advertised since pod selector is not added yet.
The port number on which the BGP peer listens. The default value is 179, the well-known port of BGP protocol.
+
+
+
+
+asn
+
+int32
+
+
+
+
The AS number of the BGP peer.
+
+
+
+
+multihopTTL
+
+int32
+
+
+
+
The Time To Live (TTL) value used in BGP packets sent to the BGP peer. The range of the value is from 1 to 255,
+and the default value is 1.
+
+
+
+
+gracefulRestartTimeSeconds
+
+int32
+
+
+
+
GracefulRestartTimeSeconds specifies how long the BGP peer would wait for the BGP session to re-establish after
+a restart before deleting stale routes. The range of the value is from 1 to 3600, and the default value is 120.
NodeSelector selects Nodes to which the BGPPolicy is applied. If multiple BGPPolicies select a Node, only one
+will be effective and enforced; others serve as alternatives.
+
+
+
+
+localASN
+
+int32
+
+
+
+
LocalASN is the AS number used by the BGP process. The available private AS number range is 64512-65535.
+
+
+
+
+listenPort
+
+int32
+
+
+
+
ListenPort is the port on which the BGP process listens, and the default value is 179.
BundleFileServer specifies the bundle file server information.
+
+
+
+
+
Field
+
Description
+
+
+
+
+
+url
+
+string
+
+
+
+
The URL of the bundle file server. It is set with format: scheme://host[:port][/path],
+e.g, https://api.example.com:8443/v1/supportbundles/. If scheme is not set, https is used by default.
+HTTPProtocol matches HTTP requests with a specific host, method, and path. All
+fields could be used alone or together. If none of the fields is provided, it
+matches all HTTP requests.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| host | string | Host represents the hostname present in the URI or the HTTP Host header to match. It does not contain the port associated with the host. |
+| method | string | Method represents the HTTP method to match. It could be GET, POST, PUT, HEAD, DELETE, TRACE, OPTIONS, CONNECT or PATCH. |
+| path | string | Path represents the URI path to match (e.g. "/index.html", "/admin"). |
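+
+As an illustrative sketch (not part of the generated reference), an
+HTTPProtocol match is used in the l7Protocols field of an Antrea-native policy
+rule. The policy name, labels, and path value below are hypothetical:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+  name: allow-http-get        # hypothetical name
+  namespace: default
+spec:
+  priority: 5
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: web            # hypothetical label
+  ingress:
+    - action: Allow           # with l7Protocols, the action can only be Allow
+      from:
+        - podSelector:
+            matchLabels:
+              app: client     # hypothetical label
+      l7Protocols:
+        - http:
+            method: "GET"
+            path: "/api/*"    # hypothetical path
+```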
+
+
+
+
+
+IPBlock
+
+IPBlock describes a particular CIDR (e.g. "192.168.1.1/24") that is allowed
+or denied to/from the workloads matched by a Spec.AppliedTo.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| cidr | string | CIDR is a string representing the IP Block. Valid examples are "192.168.1.1/24". |
+IPTypes specifies the types of Service IPs from the selected Services to be
+advertised. Currently, all Services will be selected, since a Service selector
+has not been added yet.
+ExternalNode is the opaque identifier of the agent/controller responsible
+for additional processing or handling of this external entity.
+
+IPPool
+
+IPPool defines one or multiple IP sets that can be used for the flexible IPAM
+feature. For instance, the IPs can be allocated to Pods according to the IP
+pool specified in the Deployment annotation.
TrafficControl allows mirroring or redirecting the traffic Pods send or receive. It enables users to monitor and
+analyze Pod traffic, and to enforce custom network protections for Pods with fine-grained control over network
+traffic.
Select Pods matched by this selector. If set with NamespaceSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector;
+otherwise, Pods are matched from all Namespaces.
Select all Pods from Namespaces matched by this selector. If set with
+PodSelector, Pods are matched from Namespaces matched by the
+NamespaceSelector.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlocks describe the IPAddresses/IPBlocks that are matched in to/from.
+IPBlocks cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
+Specification of the desired behavior of ClusterNetworkPolicy.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| tier | string | Tier specifies the tier to which this ClusterNetworkPolicy belongs. The ClusterNetworkPolicy order will be determined based on the combination of the Tier's Priority and the ClusterNetworkPolicy's own Priority. If not specified, the policy will be created in the Application Tier, right above the K8s NetworkPolicy which resides at the bottom. |
+| priority | float64 | Priority specifies the order of the ClusterNetworkPolicy relative to other AntreaClusterNetworkPolicies. |
+
+Set of ingress rules, evaluated based on the order in which they are set.
+Currently, an Ingress rule supports setting the From field but not the To
+field within a Rule.
+
+Set of egress rules, evaluated based on the order in which they are set.
+Currently, an Egress rule supports setting the To field but not the From
+field within a Rule.
+AppliedTo selects Pods to which the Egress will be applied.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| egressIP | string | EgressIP specifies the SNAT IP address for the selected workloads. If ExternalIPPool is empty, it must be specified manually. If ExternalIPPool is non-empty, it can be empty and will be assigned by Antrea automatically. If both ExternalIPPool and EgressIP are non-empty, the IP must be in the pool. |
+| egressIPs | []string | EgressIPs specifies multiple SNAT IP addresses for the selected workloads. Cannot be set with EgressIP. |
+| externalIPPool | string | ExternalIPPool specifies the IP Pool that the EgressIP should be allocated from. If it is empty, the specified EgressIP must be assigned to a Node manually. If it is non-empty, the EgressIP will be assigned to a Node specified by the pool automatically, and will fail over to a different Node when the Node becomes unreachable. |
+| externalIPPools | []string | ExternalIPPools specifies multiple unique IP Pools that the EgressIPs should be allocated from. Entries with the same index in EgressIPs and ExternalIPPools are correlated. Cannot be set with ExternalIPPool. |
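+
+As an illustrative sketch (not part of the generated reference), an Egress
+resource ties these fields together; the resource name, label, and pool name
+below are hypothetical:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+  name: egress-staging          # hypothetical name
+spec:
+  appliedTo:
+    namespaceSelector:
+      matchLabels:
+        env: staging            # hypothetical label
+  # Let Antrea pick, and fail over, the SNAT IP from this pool.
+  externalIPPool: external-ip-pool   # hypothetical pool name
+```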
ExternalIPPool
+
+
+
ExternalIPPool defines one or multiple IP sets that can be used in the external network. For instance, the IPs can be
+allocated to the Egress resources as the Egress IPs.
The Subnet info of this IP pool. If set, all IP ranges in the IP pool should share the same subnet attributes.
+Currently, it’s only used when an IP is allocated from the pool for Egress, and is ignored otherwise.
+Group can be used in AntreaNetworkPolicies. When used with AppliedTo, it
+cannot include a NamespaceSelector; otherwise, Antrea will not realize the
+NetworkPolicy or rule, and will just update the NetworkPolicy Status as
+"Unrealizable".
+Specification of the desired behavior of NetworkPolicy.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| tier | string | Tier specifies the tier to which this NetworkPolicy belongs. The NetworkPolicy order will be determined based on the combination of the Tier's Priority and the NetworkPolicy's own Priority. If not specified, the policy will be created in the Application Tier, right above the K8s NetworkPolicy which resides at the bottom. |
+| priority | float64 | Priority specifies the order of the NetworkPolicy relative to other NetworkPolicies. |
+
+Set of ingress rules, evaluated based on the order in which they are set.
+Currently, an Ingress rule supports setting the From field but not the To
+field within a Rule.
+
+Set of egress rules, evaluated based on the order in which they are set.
+Currently, an Egress rule supports setting the To field but not the From
+field within a Rule.
+LiveTraffic indicates, when set to true, that the Traceflow is to trace the
+live traffic rather than an injected packet. The first packet of the first
+connection that matches the packet spec will be traced.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| droppedOnly | bool | DroppedOnly indicates that only the dropped packet should be captured in a live-traffic Traceflow. |
+| timeout | int32 | Timeout specifies the timeout of the Traceflow in seconds. Defaults to 20 seconds if not set. |
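+
+As an illustrative sketch (not part of the generated reference), a
+live-traffic Traceflow capturing only dropped packets could look like the
+following; the resource and Pod names are hypothetical:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Traceflow
+metadata:
+  name: tf-dropped-only         # hypothetical name
+spec:
+  liveTraffic: true             # trace live traffic instead of injecting a packet
+  droppedOnly: true             # only capture the dropped packet
+  timeout: 60                   # give the live capture more than the 20s default
+  source:
+    namespace: default
+    pod: client                 # hypothetical Pod
+  destination:
+    namespace: default
+    pod: server                 # hypothetical Pod
+```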
Select Pods from NetworkPolicy’s Namespace as workloads in
+AppliedTo fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector or
+ExternalEntitySelector. Cannot be set with Namespaces.
Select ExternalEntities from NetworkPolicy’s Namespace as workloads
+in AppliedTo fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
+
+
+
+
+group (string, optional): Group is the name of the ClusterGroup which can be
+set as an AppliedTo in place of a stand-alone selector. A Group cannot be set
+with any other selector.
Select a certain Service which matches the NamespacedName.
+A Service can only be set in either policy level AppliedTo field in a policy
+that only has ingress rules or rule level AppliedTo field in an ingress rule.
+Only a NodePort Service can be referred by this field.
+Cannot be set with any other selector.
+ClusterNetworkPolicySpec defines the desired state for ClusterNetworkPolicy;
+its tier, priority, ingress, and egress fields are described above.
+EgressStatus represents the current status of an Egress.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| egressNode | string | The name of the Node that holds the Egress IP. |
+| egressIP | string | EgressIP indicates the effective Egress IP for the selected workloads. It could be empty if the Egress IP in the spec is not assigned to any Node. It's also useful when more than one Egress IP is specified in the spec. |
+ICMPProtocol matches ICMP traffic with a specific ICMPType and/or ICMPCode.
+All fields could be used alone or together. If none of the fields is provided,
+this matches all ICMP traffic.
+IPBlock describes a particular CIDR (e.g. "192.168.1.0/24") that is allowed
+or denied to/from the workloads matched by a Spec.AppliedTo.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| cidr | string | CIDR is a string representing the IP Block. Valid examples are "192.168.1.0/24". |
+| except | []string | (Optional) Except is a slice of CIDRs that should not be included within the IP Block. Valid examples are "192.168.1.0/28" or "2001:db8::/64". Except values will be rejected if they are outside the cidr range. |
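+
+As an illustrative sketch (not part of the generated reference), an ipBlock
+with an exception could appear in a rule's From field as follows; the CIDR
+values are hypothetical:
+
+```yaml
+# Fragment of an Antrea-native policy spec.
+ingress:
+  - action: Drop
+    from:
+      - ipBlock:
+          cidr: 192.168.1.0/24      # hypothetical CIDR
+          except:
+            - 192.168.1.0/28        # hypothetical exception, inside cidr
+```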
+IPBlock describes the IPAddresses/IPBlocks that are matched in To/From.
+IPBlock cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector.
Select Pods from NetworkPolicy’s Namespace as workloads in
+To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector or
+ExternalEntitySelector. Cannot be set with Namespaces.
+Select Pod/ExternalEntity from Namespaces matched by specific criteria.
+The currently supported criterion is match: Self, which selects from the same
+Namespace as the appliedTo workloads.
+Cannot be set with any other selector except PodSelector or
+ExternalEntitySelector. This field can only be set when the NetworkPolicyPeer
+is created for ClusterNetworkPolicy ingress/egress rules.
+Cannot be set with NamespaceSelector.
Select ExternalEntities from NetworkPolicy’s Namespace as workloads
+in To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
+
+
+
+
+group (string): Group is the name of the ClusterGroup which can be set within
+an Ingress or Egress rule in place of a stand-alone selector. A Group cannot
+be set with any other selector.
+
+fqdn (string): Restrict egress access to the Fully Qualified Domain Names
+prescribed by name or by wildcard match patterns. This field can only be set
+for the NetworkPolicyPeer of egress rules. Supported formats are exact FQDNs
+such as "google.com", and wildcard expressions such as "*wayfair.com".
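+
+As an illustrative sketch (not part of the generated reference), an fqdn peer
+is set in an egress rule's To field; the domain is hypothetical:
+
+```yaml
+# Fragment of an Antrea-native policy spec.
+egress:
+  - action: Allow
+    to:
+      - fqdn: "*.example.com"   # hypothetical wildcard domain
+    ports:
+      - protocol: TCP
+        port: 443
+```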
+The port on the given protocol. This can be either a numerical or named port
+on a Pod. If this field is not provided, this matches all port names and
+numbers.
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| endPort | int32 | (Optional) EndPort defines the end of the port range, inclusive. It can only be specified when a numerical port is specified. |
+| sourcePort | int32 | (Optional) The source port on the given protocol. This can only be a numerical port. If this field is not provided, the rule matches all source ports. |
+| sourceEndPort | int32 | (Optional) SourceEndPort defines the end of the source port range, inclusive. It can only be specified when sourcePort is specified. |
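+
+As an illustrative sketch (not part of the generated reference), destination
+and source port ranges combine in a rule's ports list; the port values are
+hypothetical:
+
+```yaml
+# Fragment of an Antrea-native policy rule.
+ports:
+  - protocol: TCP
+    port: 8080              # start of the destination port range
+    endPort: 8090           # inclusive end of the destination port range
+    sourcePort: 32000       # start of the source port range
+    sourceEndPort: 32768    # inclusive end of the source port range
+```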
+NetworkPolicySpec defines the desired state for NetworkPolicy; its tier,
+priority, ingress, and egress fields are described above.
+Rule describes the traffic allowed to/from the workloads selected by
+Spec.AppliedTo. Based on the action specified in the rule, traffic that
+exactly matches the specified ports and protocol is either allowed or denied.
+
+Set of layer 7 protocols matched by the rule. If this field is set, the action
+can only be Allow. When this field is used in a rule, any traffic matching the
+other layer 3/4 criteria of the rule (typically the 5-tuple) will be forwarded
+to an application-aware engine for protocol detection and rule enforcement;
+the traffic will be allowed if the layer 7 criteria are also matched, and
+dropped otherwise. Therefore, any rules after a layer 7 rule will not be
+enforced for this traffic.
+
+Rule is matched if traffic is intended for workloads selected by this field.
+This field can't be used with ToServices. If this field and ToServices are
+both empty or missing, this rule matches all destinations.
+
+Rule is matched if traffic is intended for a Service listed in this field.
+Currently, only ClusterIP type Services are supported in this field.
+When the scope is set to ClusterSet, it matches traffic intended for a
+multi-cluster Service listed in this field; the Service name and Namespace
+provided should match the original exported Service.
+This field can only be used when AntreaProxy is enabled. This field can't be
+used with To or Ports. If this field and To are both empty or missing, this
+rule matches all destinations.
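+
+As an illustrative sketch (not part of the generated reference), a ToServices
+egress rule references a Service by NamespacedName; the Service name is
+hypothetical:
+
+```yaml
+# Fragment of an Antrea-native policy spec.
+egress:
+  - action: Allow
+    toServices:
+      - name: backend-svc       # hypothetical ClusterIP Service
+        namespace: default
+```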
+
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| name | string | (Optional) Name describes the intention of this rule. Name should be unique within the policy. |
+| enableLogging | bool | (Optional) EnableLogging is used to indicate whether the agent should generate logs when rules are matched. Defaults to false. |
+| logLabel | string | (Optional) LogLabel is a user-defined arbitrary string which will be printed in the NetworkPolicy logs. |
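+
+As an illustrative sketch (not part of the generated reference), these fields
+annotate a rule for logging; the rule name and label are hypothetical:
+
+```yaml
+# Fragment of an Antrea-native policy rule.
+ingress:
+  - action: Allow
+    from:
+      - podSelector:
+          matchLabels:
+            app: client          # hypothetical label
+    name: allow-from-client      # hypothetical rule name
+    enableLogging: true
+    logLabel: "client-traffic"   # printed in the NetworkPolicy logs
+```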
+StartTime is the time at which the Traceflow was started by the Antrea
+Controller. Before K8s v1.20, null values (field not set) are not pruned, and
+a CR where a metav1.Time field is not set would fail OpenAPI validation (type
+string). The recommendation seems to be to use a pointer instead, so that the
+field will be omitted when serializing.
+See https://github.com/kubernetes/kubernetes/issues/86811
+
+dataplaneTag (byte): DataplaneTag is a tag to identify a traceflow session
+across Nodes.
+
+Generated with gen-crd-api-reference-docs on git commit cc441db.
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/api-reference.md b/content/docs/v2.2.0-alpha.2/docs/api-reference.md
new file mode 100644
index 00000000..821072b4
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/api-reference.md
@@ -0,0 +1,5 @@
+
+---
+---
+
+{{% include-html "api-reference.html" %}}
diff --git a/content/docs/v2.2.0-alpha.2/docs/api.md b/content/docs/v2.2.0-alpha.2/docs/api.md
new file mode 100644
index 00000000..4547cb66
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/api.md
@@ -0,0 +1,88 @@
+# Antrea API
+
+This document lists all the API resource versions currently or previously
+supported by Antrea, along with information related to their deprecation and
+removal when appropriate. It is kept up-to-date as we evolve the Antrea API.
+
+Starting with the v1.0 release, we decided to group all the Custom Resource
+Definitions (CRDs) defined by Antrea in a single API group, `crd.antrea.io`,
+instead of grouping CRDs logically in different API groups based on their
+purposes. The rationale for this change was to avoid proliferation of API
+groups. As a result, all resources in the `crd.antrea.io` group are versioned
+individually, while before the v1.0 release, we used to have a single version
+number for all the CRDs in a given group: when introducing a new version of the
+API group, we would "move" all CRDs from the earlier version to the new version
+together. This explains why the tables below are presented differently for
+`crd.antrea.io` and for other API groups.
+
+For information about the Antrea API versioning policy, please refer to this
+[document](versioning.md).
+
+## Currently-supported
+
+### CRDs in `crd.antrea.io`
+
+These are the CRDs currently available in `crd.antrea.io`.
+
+| CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+|---|---|---|---|---|
+| `AntreaAgentInfo` | v1beta1 | v1.0.0 | N/A | N/A |
+| `AntreaControllerInfo` | v1beta1 | v1.0.0 | N/A | N/A |
+| `BGPPolicy` | v1alpha1 | v2.1.0 | N/A | N/A |
+| `ClusterGroup` | v1beta1 | v1.13.0 | N/A | N/A |
+| `ClusterNetworkPolicy` | v1beta1 | v1.13.0 | N/A | N/A |
+| `Egress` | v1beta1 | v1.13.0 | N/A | N/A |
+| `ExternalEntity` | v1alpha2 | v1.0.0 | N/A | N/A |
+| `ExternalIPPool` | v1beta1 | v1.13.0 | N/A | N/A |
+| `ExternalNode` | v1alpha1 | v1.8.0 | N/A | N/A |
+| `IPPool` | v1alpha2 | v1.4.0 | v2.0.0 | N/A |
+| `IPPool` | v1beta1 | v2.0.0 | N/A | N/A |
+| `Group` | v1beta1 | v1.13.0 | N/A | N/A |
+| `NetworkPolicy` | v1beta1 | v1.13.0 | N/A | N/A |
+| `NodeLatencyMonitor` | v1alpha1 | v2.1.0 | N/A | N/A |
+| `SupportBundleCollection` | v1alpha1 | v1.10.0 | N/A | N/A |
+| `Tier` | v1beta1 | v1.13.0 | N/A | N/A |
+| `Traceflow` | v1beta1 | v1.13.0 | N/A | N/A |
+| `TrafficControl` | v1alpha2 | v1.7.0 | N/A | N/A |
+
+### Other API groups
+
+These are the API group versions which are currently available when using Antrea.
+
+| API group | API version | API Service? | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+|---|---|---|---|---|---|
+| `controlplane.antrea.io` | `v1beta2` | Yes | v1.0.0 | N/A | N/A |
+| `stats.antrea.io` | `v1alpha1` | Yes | v1.0.0 | N/A | N/A |
+| `system.antrea.io` | `v1beta1` | Yes | v1.0.0 | N/A | N/A |
+
+## Previously-supported
+
+### Previously-supported API groups
+
+| API group | API version | API Service? | Introduced in | Deprecated in | Removed in |
+|---|---|---|---|---|---|
+| `core.antrea.tanzu.vmware.com` | `v1alpha1` | No | v0.8.0 | v0.11.0 | v0.11.0 |
+| `networking.antrea.tanzu.vmware.com` | `v1beta1` | Yes | v0.3.0 | v0.10.0 | v1.2.0 |
+| `controlplane.antrea.tanzu.vmware.com` | `v1beta1` | Yes | v0.10.0 | v0.11.0 | v1.3.0 |
+| `clusterinformation.antrea.tanzu.vmware.com` | `v1beta1` | No | v0.3.0 | v1.0.0 | v1.6.0 |
+| `core.antrea.tanzu.vmware.com` | `v1alpha2` | No | v0.11.0 | v1.0.0 | v1.6.0 |
+| `controlplane.antrea.tanzu.vmware.com` | `v1beta2` | Yes | v0.11.0 | v1.0.0 | v1.6.0 |
+| `ops.antrea.tanzu.vmware.com` | `v1alpha1` | No | v0.8.0 | v1.0.0 | v1.6.0 |
+| `security.antrea.tanzu.vmware.com` | `v1alpha1` | No | v0.8.0 | v1.0.0 | v1.6.0 |
+| `stats.antrea.tanzu.vmware.com` | `v1alpha1` | Yes | v0.10.0 | v1.0.0 | v1.6.0 |
+| `system.antrea.tanzu.vmware.com` | `v1beta1` | Yes | v0.5.0 | v1.0.0 | v1.6.0 |
+
+### Previously-supported CRDs
+
+| CRD | CRD version | Introduced in | Deprecated in | Removed in |
+|---|---|---|---|---|
+| `ClusterGroup` | v1alpha2 | v1.0.0 | v1.1.0 | v2.0.0 |
+| `ClusterGroup` | v1alpha3 | v1.1.0 | v1.13.0 | v2.0.0 |
+| `ClusterNetworkPolicy` | v1alpha1 | v1.0.0 | v1.13.0 | v2.0.0 |
+| `Egress` | v1alpha2 | v1.0.0 | v1.13.0 | v2.0.0 |
+| `ExternalEntity` | v1alpha1 | v0.10.0 | v0.11.0 | v2.0.0 |
+| `ExternalIPPool` | v1alpha2 | v1.2.0 | v1.13.0 | v2.0.0 |
+| `Group` | v1alpha3 | v1.8.0 | v1.13.0 | v2.0.0 |
+| `NetworkPolicy` | v1alpha1 | v1.0.0 | v1.13.0 | v2.0.0 |
+| `Tier` | v1alpha1 | v1.0.0 | v1.13.0 | v2.0.0 |
+| `Traceflow` | v1alpha1 | v1.0.0 | v1.13.0 | v2.0.0 |
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/README.md b/content/docs/v2.2.0-alpha.2/docs/assets/README.md
new file mode 100644
index 00000000..61dc9ea7
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/README.md
@@ -0,0 +1,9 @@
+# Assets
+
+## SVG images
+
+The SVG images / diagrams in this directory have been created using
+[Inkscape](https://inkscape.org/) and exported as PNG files, which can be embedded in Markdown
+files. If you edit these images, please re-export them as PNG with a 300 dpi resolution. If you
+create new SVG images / diagrams for documentation, please check in both the SVG source and the
+exported PNG file.
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/adopters/glasnostic-logo.png b/content/docs/v2.2.0-alpha.2/docs/assets/adopters/glasnostic-logo.png
new file mode 100644
index 00000000..52f96a48
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/adopters/glasnostic-logo.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/adopters/terasky-logo.png b/content/docs/v2.2.0-alpha.2/docs/assets/adopters/terasky-logo.png
new file mode 100644
index 00000000..d26875f4
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/adopters/terasky-logo.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/adopters/transwarp-logo.png b/content/docs/v2.2.0-alpha.2/docs/assets/adopters/transwarp-logo.png
new file mode 100644
index 00000000..07254111
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/adopters/transwarp-logo.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/antrea_overview.svg b/content/docs/v2.2.0-alpha.2/docs/assets/antrea_overview.svg
new file mode 100644
index 00000000..4a3b1da7
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/antrea_overview.svg
@@ -0,0 +1,913 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/antrea_overview.svg.png b/content/docs/v2.2.0-alpha.2/docs/assets/antrea_overview.svg.png
new file mode 100644
index 00000000..9aff76e7
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/antrea_overview.svg.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/arch.svg b/content/docs/v2.2.0-alpha.2/docs/assets/arch.svg
new file mode 100644
index 00000000..a549e33f
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/arch.svg
@@ -0,0 +1,2076 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/developer-workflow-opaque-bg.png b/content/docs/v2.2.0-alpha.2/docs/assets/developer-workflow-opaque-bg.png
new file mode 100644
index 00000000..191e9d50
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/developer-workflow-opaque-bg.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/developer-workflow.graffle b/content/docs/v2.2.0-alpha.2/docs/assets/developer-workflow.graffle
new file mode 100644
index 00000000..725d99a8
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/developer-workflow.graffle differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/flow_visibility.svg b/content/docs/v2.2.0-alpha.2/docs/assets/flow_visibility.svg
new file mode 100644
index 00000000..fdbb990a
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/flow_visibility.svg
@@ -0,0 +1,2538 @@
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/hns_integration.svg b/content/docs/v2.2.0-alpha.2/docs/assets/hns_integration.svg
new file mode 100644
index 00000000..172b49e2
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/hns_integration.svg
@@ -0,0 +1,856 @@
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/logo/README.md b/content/docs/v2.2.0-alpha.2/docs/assets/logo/README.md
new file mode 100644
index 00000000..c1233bae
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/logo/README.md
@@ -0,0 +1,28 @@
+# Antrea Logos
+
+We provide the following 2 logos ("regular" and "stacked") in both SVG and PNG
+format. Use the one that works for you!
+
+## Regular SVG
+
+![Regular SVG](antrea_logo.svg)
+
+## Regular PNG Large
+
+![Regular PNG Large](antrea_logo_lrg.png)
+
+## Regular PNG Small
+
+![Regular PNG Small](antrea_logo_sml.png)
+
+## Stacked SVG
+
+![Stacked SVG](antrea_logo_stacked.svg)
+
+## Stacked PNG Large
+
+![Stacked PNG Large](antrea_logo_stacked_lrg.png)
+
+## Stacked PNG Small
+
+![Stacked PNG Small](antrea_logo_stacked_sml.png)
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo.svg b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo.svg
new file mode 100644
index 00000000..55a22cc5
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo.svg
@@ -0,0 +1,70 @@
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_lrg.png b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_lrg.png
new file mode 100644
index 00000000..cc09b97d
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_lrg.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_sml.png b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_sml.png
new file mode 100644
index 00000000..43ee4286
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_sml.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked.svg b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked.svg
new file mode 100644
index 00000000..194bd7d6
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked.svg
@@ -0,0 +1,71 @@
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked_lrg.png b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked_lrg.png
new file mode 100644
index 00000000..e4577bf1
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked_lrg.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked_sml.png b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked_sml.png
new file mode 100644
index 00000000..d7009f44
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/logo/antrea_logo_stacked_sml.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/node.svg b/content/docs/v2.2.0-alpha.2/docs/assets/node.svg
new file mode 100644
index 00000000..bdab8f9d
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/node.svg
@@ -0,0 +1,406 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/node.svg.png b/content/docs/v2.2.0-alpha.2/docs/assets/node.svg.png
new file mode 100644
index 00000000..e8b8b0ce
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/node.svg.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/ovs-pipeline-external-node.svg b/content/docs/v2.2.0-alpha.2/docs/assets/ovs-pipeline-external-node.svg
new file mode 100644
index 00000000..63fee6dc
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/ovs-pipeline-external-node.svg
@@ -0,0 +1,1216 @@
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/ovs-pipeline.svg b/content/docs/v2.2.0-alpha.2/docs/assets/ovs-pipeline.svg
new file mode 100644
index 00000000..6630ac65
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/ovs-pipeline.svg
@@ -0,0 +1,2361 @@
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/policy-only-cni.svg b/content/docs/v2.2.0-alpha.2/docs/assets/policy-only-cni.svg
new file mode 100644
index 00000000..3adb5746
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/policy-only-cni.svg
@@ -0,0 +1,138 @@
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/service_walk.svg b/content/docs/v2.2.0-alpha.2/docs/assets/service_walk.svg
new file mode 100644
index 00000000..85c13f2f
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/service_walk.svg
@@ -0,0 +1,828 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/service_walk.svg.png b/content/docs/v2.2.0-alpha.2/docs/assets/service_walk.svg.png
new file mode 100644
index 00000000..54bc058c
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/service_walk.svg.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/traffic_external_node.svg b/content/docs/v2.2.0-alpha.2/docs/assets/traffic_external_node.svg
new file mode 100644
index 00000000..19e547ef
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/traffic_external_node.svg
@@ -0,0 +1,438 @@
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/traffic_walk.svg b/content/docs/v2.2.0-alpha.2/docs/assets/traffic_walk.svg
new file mode 100644
index 00000000..a40396fc
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/traffic_walk.svg
@@ -0,0 +1,976 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/traffic_walk.svg.png b/content/docs/v2.2.0-alpha.2/docs/assets/traffic_walk.svg.png
new file mode 100644
index 00000000..43064a68
Binary files /dev/null and b/content/docs/v2.2.0-alpha.2/docs/assets/traffic_walk.svg.png differ
diff --git a/content/docs/v2.2.0-alpha.2/docs/assets/windows_external_traffic.svg b/content/docs/v2.2.0-alpha.2/docs/assets/windows_external_traffic.svg
new file mode 100644
index 00000000..1339dcc2
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/assets/windows_external_traffic.svg
@@ -0,0 +1,386 @@
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/bgp-policy.md b/content/docs/v2.2.0-alpha.2/docs/bgp-policy.md
new file mode 100644
index 00000000..09d30aba
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/bgp-policy.md
@@ -0,0 +1,230 @@
+# BGPPolicy
+
+## Table of Contents
+
+
+- [What is BGPPolicy?](#what-is-bgppolicy)
+- [Prerequisites](#prerequisites)
+- [The BGPPolicy resource](#the-bgppolicy-resource)
+ - [NodeSelector](#nodeselector)
+ - [LocalASN](#localasn)
+ - [ListenPort](#listenport)
+ - [Advertisements](#advertisements)
+ - [BGPPeers](#bgppeers)
+- [BGP router ID](#bgp-router-id)
+- [BGP Authentication](#bgp-authentication)
+- [Example Usage](#example-usage)
+ - [Combined Advertisements of Service, Pod, and Egress IPs](#combined-advertisements-of-service-pod-and-egress-ips)
+ - [Advertise Egress IPs to external BGP peers with more than one hop](#advertise-egress-ips-to-external-bgp-peers-with-more-than-one-hop)
+- [Using antctl](#using-antctl)
+- [Limitations](#limitations)
+
+
+## What is BGPPolicy?
+
+`BGPPolicy` is a custom resource that allows users to run a BGP process on selected Kubernetes Nodes and advertise
+Service IPs, Pod IPs, and Egress IPs to remote BGP peers, facilitating the integration of Kubernetes workloads with an
+external BGP-enabled network.
+
+## Prerequisites
+
+BGPPolicy was introduced in Antrea v2.1 as an alpha feature. The `BGPPolicy` feature gate must be enabled for the antrea-agent
+in the `antrea-config` ConfigMap for the feature to work, like the following:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ BGPPolicy: true
+```
+
+## The BGPPolicy resource
+
+BGPPolicy is a Kubernetes Custom Resource, defined by a Custom Resource Definition (CRD).
+
+The following manifest creates a BGPPolicy object. It will start a BGP process with ASN `64512`, listening on port `179`,
+on Nodes labeled with `bgp=enabled`. The process will advertise LoadBalancerIPs and ExternalIPs to a BGP peer at IP
+address `192.168.77.200`, which has ASN `65001` and listens on port `179`:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha1
+kind: BGPPolicy
+metadata:
+ name: example-bgp-policy
+spec:
+ nodeSelector:
+ matchLabels:
+ bgp: enabled
+ localASN: 64512
+ listenPort: 179
+ advertisements:
+ service:
+ ipTypes: [LoadBalancerIP, ExternalIP]
+ bgpPeers:
+ - address: 192.168.77.200
+ asn: 65001
+ port: 179
+```
+
+### NodeSelector
+
+The `nodeSelector` field selects which Kubernetes Nodes the BGPPolicy applies to based on the Node labels. The field is
+mandatory.
+
+**Note**: If multiple BGPPolicy objects select the same Node, the one with the earliest creation time will be chosen
+as the effective BGPPolicy.
+
+### LocalASN
+
+The `localASN` field defines the Autonomous System Number (ASN) that the local BGP process uses. The available private
+ASN range is `64512-65535`. The field is mandatory.
+
+### ListenPort
+
+The `listenPort` field specifies the port on which the BGP process listens. The default value is 179. The valid port
+range is `1-65535`.
+
+### Advertisements
+
+The `advertisements` field configures which IPs are advertised to BGP peers.
+
+- `pod`: Specifies how to advertise Pod IPs. The Node IPAM Pod CIDRs will be advertised by setting `pod:{}`. Note that
+ IPs allocated by Antrea Flexible IPAM are not yet supported.
+- `egress`: Specifies how to advertise Egress IPs. All Egress IPs will be advertised by setting `egress:{}`. A Node will
+ only advertise Egress IPs which are local (i.e., assigned to the Node).
+- `service`: Specifies how to advertise Service IPs. The `ipTypes` field lists the types of Service IPs to be advertised,
+ which can include `ClusterIP`, `ExternalIP`, and `LoadBalancerIP`.
+ - All Nodes can advertise all ClusterIPs, respecting `internalTrafficPolicy`. If `internalTrafficPolicy` is set to
+ `Local`, a Node will only advertise ClusterIPs with at least one local Endpoint.
+ - All Nodes can advertise all ExternalIPs and LoadBalancerIPs, respecting `externalTrafficPolicy`. If
+ `externalTrafficPolicy` is set to `Local`, a Node will only advertise IPs with at least one local Endpoint.
+
+### BGPPeers
+
+The `bgpPeers` field lists the BGP peers to which the advertisements are sent.
+
+- `address`: The IP address of the BGP peer.
+- `asn`: The Autonomous System Number of the BGP peer.
+- `port`: The port number on which the BGP peer listens. The default value is 179.
+- `multihopTTL`: The Time To Live (TTL) value used in BGP packets sent to the BGP peer, with a range of 1 to 255.
+ The default value is 1.
+- `gracefulRestartTimeSeconds`: Specifies how long the BGP peer waits for the BGP session to re-establish after a
+ restart before deleting stale routes, with a range of 1 to 3600 seconds. The default value is 120 seconds.
+
+## BGP router ID
+
+The BGP router identifier (ID) is a 4-byte field that is usually represented as an IPv4 address. Antrea uses the following
+steps to choose the BGP router ID:
+
+1. If the `node.antrea.io/bgp-router-id` annotation is present on the Node and its value is a valid IPv4 address string,
+ we will use the provided value.
+2. Otherwise, for an IPv4-only or dual-stack Kubernetes cluster, the Node's IPv4 address (assigned to the transport
+ interface) is used.
+3. Otherwise, for IPv6-only clusters, a 32-bit integer will be generated by hashing the Node's name, then converted to the
+ string representation of an IPv4 address.
+
+After this selection process, the `node.antrea.io/bgp-router-id` annotation is added or updated as necessary to reflect
+the selected BGP router ID.
+
+The router ID is generated once and will not be updated if the Node configuration changes (e.g., if the Node's IPv4 address is updated).
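+
+For example, to pin the router ID explicitly, the annotation from step 1 can be
+set on the Node before the BGPPolicy is applied (a sketch; the Node name and
+IPv4 address are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  name: worker-1                                 # illustrative Node name
+  annotations:
+    node.antrea.io/bgp-router-id: "172.16.0.1"   # must be a valid IPv4 address string
+```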
+
+## BGP Authentication
+
+BGP authentication ensures that BGP sessions are established and maintained only with legitimate peers. Users can provide
+authentication passwords for different BGP peering sessions by storing them in a Kubernetes Secret. The Secret must
+be defined in the same Namespace as Antrea (`kube-system` by default) and must be named `antrea-bgp-passwords`.
+
+By default, this Secret is not created, and BGP authentication is considered unconfigured for all BGP peers. If the
+Secret is created like in the following example, each entry should have a key that is the concatenated string of the BGP
+peer IP address and ASN (e.g., `192.168.77.100-65000`, `2001:db8::1-65000`), with the value being the password for that
+BGP peer. If a given BGP peer does not have a corresponding key in the Secret data, then authentication is considered
+disabled for that peer.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: antrea-bgp-passwords
+ namespace: kube-system
+stringData:
+ 192.168.77.100-65000: "password"
+ 2001:db8::1-65000: "password"
+type: Opaque
+```
+
+## Example Usage
+
+### Combined Advertisements of Service, Pod, and Egress IPs
+
+In this example, we will advertise Service IPs of types LoadBalancerIP and ExternalIP, along with
+Pod CIDRs and Egress IPs, from the selected Nodes to multiple remote BGP peers.
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha1
+kind: BGPPolicy
+metadata:
+ name: advertise-all-ips
+spec:
+ nodeSelector:
+ matchLabels:
+ bgp: enabled
+ localASN: 64512
+ listenPort: 179
+ advertisements:
+ service:
+ ipTypes: [LoadBalancerIP, ExternalIP]
+ pod: {}
+ egress: {}
+ bgpPeers:
+ - address: 192.168.77.200
+ asn: 65001
+ port: 179
+ - address: 192.168.77.201
+ asn: 65001
+ port: 179
+```
+
+### Advertise Egress IPs to external BGP peers with more than one hop
+
+In this example, we configure the BGPPolicy to advertise Egress IPs from selected Nodes to a remote BGP peer located
+multiple hops away from the cluster. It's crucial to set the `multihopTTL` to a value equal to or greater than the
+number of hops, allowing BGP packets to traverse multiple hops to reach the peer.
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha1
+kind: BGPPolicy
+metadata:
+ name: advertise-all-egress-ips
+spec:
+ nodeSelector:
+ matchLabels:
+ bgp: enabled
+ localASN: 64512
+ listenPort: 179
+ advertisements:
+ egress: {}
+ bgpPeers:
+ - address: 192.168.78.201
+ asn: 65001
+ port: 179
+ multihopTTL: 2
+```
+
+## Using antctl
+
+Please refer to the corresponding [antctl page](antctl.md#bgp-commands).
+
+## Limitations
+
+- The routes received from remote BGP peers will not be installed. Therefore, you must ensure that the path from Nodes
+ to the remote BGP network is properly configured and routable. This involves configuring your network infrastructure
+ to handle the routing of traffic between your Kubernetes cluster and the remote BGP network.
+- Only Linux Nodes are supported. The feature has not been validated on Windows Nodes, though theoretically it can work
+ with Windows Nodes.
+- Advanced BGP features such as BGP communities, route filtering, route reflection, confederations, and other BGP policy
+ mechanisms defined in BGP RFCs are not supported.
diff --git a/content/docs/v2.2.0-alpha.2/docs/configuration.md b/content/docs/v2.2.0-alpha.2/docs/configuration.md
new file mode 100644
index 00000000..d4eab25d
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/configuration.md
@@ -0,0 +1,90 @@
+# Configuration
+
+## antrea-agent
+
+### Command line options
+
+```text
+--config string The path to the configuration file
+--v Level number for the log level verbosity
+```
+
+Use `antrea-agent -h` to see complete options.
+
+### Configuration
+
+The `antrea-agent` configuration file specifies the agent configuration
+parameters. For all the agent configuration parameters of a Linux Node, refer to
+this [base configuration file](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/charts/antrea/conf/antrea-agent.conf).
+For all the configuration parameters of a Windows Node, refer to this [base
+configuration file](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/charts/antrea-windows/conf/antrea-agent.conf).
+
+## antrea-controller
+
+### Command line options
+
+```text
+--config string The path to the configuration file
+--v Level number for the log level verbosity
+```
+
+Use `antrea-controller -h` to see complete options.
+
+### Configuration
+
+The `antrea-controller` configuration file specifies the controller
+configuration parameters. For all the controller configuration parameters,
+refer to this [base configuration file](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/charts/antrea/conf/antrea-controller.conf).
+
+## CNI configuration
+
+A typical CNI configuration looks like this:
+
+```json
+ {
+ "cniVersion":"0.3.0",
+ "name": "antrea",
+ "plugins": [
+ {
+ "type": "antrea",
+ "ipam": {
+ "type": "host-local"
+ }
+ },
+ {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ },
+ {
+ "type": "bandwidth",
+ "capabilities": {
+ "bandwidth": true
+ }
+ }
+ ]
+ }
+```
+
+You can also set the MTU (for the Pod's network interface) in the CNI
+configuration using `"mtu": <MTU>`. When using an `antrea.yml` manifest, the
+MTU should be set with the `antrea-agent` `defaultMTU` configuration parameter,
+which will apply to all Pods and the host gateway interface on every Node. It is
+strongly discouraged to set the `"mtu"` field in the CNI configuration to a
+value that does not match the `defaultMTU` parameter, as it may lead to
+performance degradation or packet drops.
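+
+For reference, here is a sketch of what an explicit `"mtu"` setting on the
+`antrea` plugin entry would look like (the value 1450 is illustrative; as noted
+above, prefer setting `defaultMTU` and keeping both values consistent):
+
+```json
+{
+    "type": "antrea",
+    "mtu": 1450,
+    "ipam": {
+        "type": "host-local"
+    }
+}
+```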
+
+Antrea enables portmap and bandwidth CNI plugins by default to support `hostPort`
+and traffic shaping functionalities for Pods respectively. In order to disable
+them, remove the corresponding section from `antrea-cni.conflist` in the Antrea
+manifest. For example, removing the following section disables the portmap plugin:
+
+```json
+{
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+}
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/contributors/cherry-picks.md b/content/docs/v2.2.0-alpha.2/docs/contributors/cherry-picks.md
new file mode 100644
index 00000000..5e2c2db4
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/contributors/cherry-picks.md
@@ -0,0 +1,64 @@
+# Cherry-picks to release branches
+
+Some Pull Requests (PRs) which fix bugs in the main branch of Antrea can be
+identified as good candidates for backporting to currently maintained release
+branches (using a Git [cherry-pick](https://git-scm.com/docs/git-cherry-pick)),
+so that they can be included in subsequent patch releases. If you have authored
+such a PR (thank you!!!), one of the Antrea maintainers may comment on your PR
+to ask for your assistance with that process. This document provides the steps
+you can use to cherry-pick your change to one or more release branches, with the
+help of the [cherry-pick script][cherry-pick-script].
+
+For information about which changes are good candidates for cherry-picking,
+please refer to our [versioning
+policy](../versioning.md#minor-releases-and-patch-releases).
+
+## Prerequisites
+
+* A PR which was approved and merged into the main branch.
+* The PR was identified as a good candidate for backporting by an Antrea
+ maintainer: they will label the PR with `action/backport` and comment a list
+ of release branches to which the patch should be backported (example:
+ [`release-1.0`](https://github.com/antrea-io/antrea/tree/release-1.0)).
+* Have the [Github CLI](https://cli.github.com/) installed (version >= 1.3) and
+ make sure you authenticate yourself by running `gh auth`.
+* Your own fork of the Antrea repository, and a clone of this fork with two
+ remotes: the `origin` remote tracking your fork and the `upstream` remote
+ tracking the upstream Antrea repository. If you followed our recommended
+ [Github Workflow], this should already be the case.
+
+## Cherry-pick your changes
+
+* Set the `GITHUB_USER` environment variable.
+* _Optional_ If your remote names do not match our recommended [Github
+ Workflow], you must set the `UPSTREAM_REMOTE` and `FORK_REMOTE` environment
+ variables.
+* Run the [cherry-pick script][cherry-pick-script]
+
+ This example applies a main branch PR #2134 to the remote branch
+ `upstream/release-1.0`:
+
+ ```shell
+ hack/cherry-pick-pull.sh upstream/release-1.0 2134
+ ```
+
+ If the cherry-picked PR does not apply cleanly against an old release branch,
+ the script will let you resolve conflicts manually. This is one of the reasons
+ why we ask contributors to backport their own bug fixes, as their
+ participation is critical in case of such a conflict.
+
+The script will create a PR on Github for you, which will automatically be
+labelled with `kind/cherry-pick`. This PR will go through the normal testing
+process, although it should be very quick given that the original PR was already
+approved and merged into the main branch. The PR should also go through normal
+CI testing. In some cases, a few CI tests may fail because we do not have
+dedicated CI infrastructure for past Antrea releases. If this happens, the PR
+will be merged despite the presence of CI test failures.
+
+You will need to run the cherry pick script separately for each release branch
+you need to cherry-pick to. Typically, cherry-picks should be applied to all
+[maintained](../versioning.md#release-cycle) release branches for which the fix
+is applicable.
+
+[cherry-pick-script]: ../../hack/cherry-pick-pull.sh
+[Github Workflow]: ../../CONTRIBUTING.md#github-workflow
diff --git a/content/docs/v2.2.0-alpha.2/docs/contributors/code-generation.md b/content/docs/v2.2.0-alpha.2/docs/contributors/code-generation.md
new file mode 100644
index 00000000..1037af84
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/contributors/code-generation.md
@@ -0,0 +1,49 @@
+# Code and Documentation Generation
+
+## CNI
+
+Antrea uses [protoc](https://github.com/protocolbuffers/protobuf), [protoc-gen-go](https://github.com/protocolbuffers/protobuf-go)
+and [protoc-gen-go-grpc](https://github.com/grpc/grpc-go) to generate CNI gRPC service code.
+
+If you make any change to [cni.proto](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/pkg/apis/cni/v1beta1/cni.proto), you can
+re-generate the code by invoking `make codegen`. Make sure that you commit your changes before
+running the script, and that you commit the generated code after running it.
+
+## Extension API Resources and Custom Resource Definitions
+
+Antrea extends Kubernetes API with an extension APIServer and Custom Resource Definitions, and uses
+[k8s.io/code-generator
+(release-1.18)](https://github.com/kubernetes/code-generator/tree/release-1.18) to generate clients,
+informers, conversions, protobuf codecs and other helpers. The resource definitions and their
+generated code are located in the conventional paths: `pkg/apis/<group>` for internal
+types, `pkg/apis/<group>/<version>` for versioned types, and `pkg/client/clientset` for
+clients.
+
+If you make any change to any `types.go`, you can re-generate the code by invoking `make
+codegen`. Make sure that you commit your changes before running the script, and that you commit the
+generated code after running it.
+
+## Mocks
+
+Antrea uses the [GoMock](https://github.com/uber-go/mock) framework for its unit tests.
+
+If you add or modify interfaces that need to be mocked, please add or update `MOCKGEN_TARGETS` in
+[update-codegen-dockerized.sh](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/update-codegen-dockerized.sh) accordingly. All the mocks for a
+given package will typically be generated in a sub-package called `testing`. For example, the mock
+code for the interface `Baz` defined in the package `pkg/foo/bar` will be generated to
+`pkg/foo/bar/testing/mock_bar.go`, and you can import it via `pkg/foo/bar/testing`.
+
+Same as above, you can re-generate the mock source code (with `mockgen`) by invoking `make codegen`.
+Make sure that you commit your changes before running the script, and that you commit the
+generated code after running it.
+
+## Generated Documentation
+
+[Prometheus integration document](../prometheus-integration.md) contains a list
+of supported metrics, which could be affected by third party component
+changes. The collection of metrics is done from a running Kind deployment, in
+order to reflect the current list of metrics exposed by the Antrea
+Controller and Agents.
+
+To regenerate the metrics list within the document, use [make-metrics-doc.sh](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/make-metrics-doc.sh)
+with document location as a parameter.
diff --git a/content/docs/v2.2.0-alpha.2/docs/contributors/docker-desktop-alternatives.md b/content/docs/v2.2.0-alpha.2/docs/contributors/docker-desktop-alternatives.md
new file mode 100644
index 00000000..bdb0ca96
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/contributors/docker-desktop-alternatives.md
@@ -0,0 +1,67 @@
+# Docker Desktop Alternatives
+
+The Antrea build system relies on Docker to build container images, which can
+then be used to test Antrea locally. As an Antrea developer, if you run `make`,
+`docker build` will be invoked to build the `antrea-ubuntu` container image. On
+Linux, Docker Engine (based on moby) runs natively, but if you use macOS or
+Windows for Antrea development, Docker needs to run inside a Linux Virtual
+Machine (VM). This VM is typically managed by [Docker
+Desktop](https://www.docker.com/products/docker-desktop). Starting January 31
+2022, Docker Desktop requires a per user paid subscription for professional use
+in "large" companies (more than 250 employees or more than $10 million in annual
+revenue). See for details. For developers
+who contribute to Antrea as an employee of such a company (and not in their own
+individual capacity), it is no longer possible to use Docker Desktop to build
+(and possibly run) Antrea Docker images locally, unless they have a Docker
+subscription.
+
+For contributors who do not have a Docker subscription, we recommend the
+following Docker Desktop alternatives.
+
+## Colima (macOS)
+
+[Colima](https://github.com/abiosoft/colima) is a UI built with
+[Lima](https://github.com/lima-vm/lima). It supports running a container runtime
+(docker, containerd or kubernetes) on macOS, inside a Lima VM. Major benefits
+of Colima include its ability to be used as a drop-in replacement for Docker
+Desktop and its ability to coexist with Docker Desktop on the same macOS
+machine.
+
+To install and run Colima, follow these steps:
+
+* `brew install colima`
+* `colima start` to start Colima (the Linux VM) with the default
+ configuration. Check the Colima documentation for configuration options. By
+ default, Colima will use the Docker runtime. This means that you can keep
+ using the `docker` CLI and that no changes are required to build Antrea.
+  - We recommend increasing the CPU and memory resources allocated to the VM,
+    as by default it only has 2 vCPUs and 2GiB of memory. For example, you can
+    use: `colima start --cpu 4 --memory 8`. Otherwise, building Antrea
+    container images may be slow, and your Kind clusters may run out of memory.
+* `docker context list` and check that the `colima` context is selected. You can
+ use `docker context use desktop-linux` to go back to Docker Desktop.
+* `make` to build Antrea locally. Check that the `antrea-ubuntu` image is
+ available by listing all images with `docker images`.
+
+We have validated that Kind clusters with Antrea can run inside Colima without
+any issue (confirmed for IPv4, IPv6 single-stack clusters, as well as for
+dual-stack clusters).
+
+At any time, you can stop the VM with `colima stop` and restart it with `colima
+start` (you do not need to specify configuration flags again, unless you want to
+change the current values). You can also check the status of the VM with `colima
+ls`.
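+
+To summarize the day-to-day lifecycle commands:
+
+```bash
+colima start --cpu 4 --memory 8  # first start, with increased resources
+colima stop                      # stop the Linux VM
+colima start                     # restart with the previously-used configuration
+colima ls                        # check the status of the VM
+```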
+
+While it should be possible to have multiple Colima instances simultaneously,
+this is not something that we have tested.
+
+## Rancher Desktop (macOS and Windows)
+
+Rancher Desktop is another possible alternative to Docker Desktop, which
+supports Windows in addition to macOS. On macOS, it also uses Lima as the Linux
+VM. Two major differences with Colima are that Rancher Desktop will always run
+Kubernetes, and that Rancher Desktop uses the
+[`nerdctl`](https://github.com/containerd/nerdctl) UI for container management
+instead of `docker`. However, the `nerdctl` and `docker` UIs are supposed to be
+compatible, so in theory it should be possible to alias `docker` to `nerdctl`
+and keep using the Antrea build system as is (to be tested).
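+
+For example, one (untested) way to do this is to put a `docker` shim for
+`nerdctl` on your PATH, since shell aliases are not visible to `make`:
+
+```bash
+# Untested sketch: make "docker" resolve to nerdctl for the Antrea build system
+sudo ln -s "$(command -v nerdctl)" /usr/local/bin/docker
+make
+```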
diff --git a/content/docs/v2.2.0-alpha.2/docs/contributors/eks-terraform.md b/content/docs/v2.2.0-alpha.2/docs/contributors/eks-terraform.md
new file mode 100644
index 00000000..eb2a2b86
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/contributors/eks-terraform.md
@@ -0,0 +1,62 @@
+# Deploying EKS with Antrea
+
+Antrea can run in networkPolicyOnly mode in AKS and EKS clusters. This document
+describes the steps to create an EKS cluster with Antrea using terraform.
+
+## Common Prerequisites
+
+1. To run an EKS cluster, install and configure the AWS CLI (either version 1
+   or version 2); see the AWS CLI installation and configuration guides.
+2. Install aws-iam-authenticator; see its installation documentation.
+3. Install terraform; see the Terraform installation documentation.
+4. You must already have an SSH key pair created. This key pair will be used to
+   access the worker Nodes via SSH.
+
+```bash
+ls ~/.ssh/
+id_rsa id_rsa.pub
+```
+
+## Create an EKS cluster via terraform
+
+Ensure that you have permission to create an EKS cluster, and that you have
+already created the EKS cluster role as well as the worker Node profile.
+
+```bash
+export TF_VAR_eks_cluster_iam_role_name=YOUR_EKS_ROLE
+export TF_VAR_eks_iam_instance_profile_name=YOUR_EKS_WORKER_NODE_PROFILE
+export TF_VAR_eks_key_pair_name=YOUR_KEY_PAIR_TO_ACCESS_WORKER_NODE
+```
+
+Where
+
+- TF_VAR_eks_cluster_iam_role_name may be created by following these
+ [instructions](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html#create-service-role)
+- TF_VAR_eks_iam_instance_profile_name may be created by following these
+ [instructions](https://docs.aws.amazon.com/eks/latest/userguide/worker_node_IAM_role.html#create-worker-node-role)
+- TF_VAR_eks_key_pair_name is the AWS key pair name you have configured by following these
+  [instructions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws),
+  using the SSH key pair created in Prerequisites item 4 (see the example command below)
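+
+For example, with AWS CLI v2, the key pair from the prerequisites can be
+imported as follows (the key pair name is a placeholder):
+
+```bash
+aws ec2 import-key-pair \
+  --key-name YOUR_KEY_PAIR_TO_ACCESS_WORKER_NODE \
+  --public-key-material fileb://~/.ssh/id_rsa.pub
+```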
+
+Create the EKS cluster:
+
+```bash
+./hack/terraform-eks.sh create
+```
+
+Interact with the EKS cluster:
+
+```bash
+./hack/terraform-eks.sh kubectl ... # issue kubectl commands to EKS cluster
+./hack/terraform-eks.sh load ... # load local built images to EKS cluster
+./hack/terraform-eks.sh destroy # destroy EKS cluster
+```
+
+The worker Nodes can be accessed via SSH using their external IPs.
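+
+For example (the login user depends on the Node AMI; `ec2-user` is the default
+for Amazon Linux):
+
+```bash
+ssh -i ~/.ssh/id_rsa ec2-user@<WORKER_NODE_EXTERNAL_IP>
+```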
+
+Apply Antrea to the EKS cluster:
+
+```bash
+./hack/generate-manifest.sh --encap-mode networkPolicyOnly | ~/terraform/eks kubectl apply -f -
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/contributors/github-labels.md b/content/docs/v2.2.0-alpha.2/docs/contributors/github-labels.md
new file mode 100644
index 00000000..712671af
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/contributors/github-labels.md
@@ -0,0 +1,130 @@
+# GitHub Label List
+
+We use GitHub labels to perform issue triage, track and report on development
+progress, plan roadmaps, and automate issue grooming.
+
+To ensure that contributing new issues and PRs remains straightforward, we would
+like to keep the labels required for submission to a minimum. The remaining
+labels will be added either by automation or manual grooming by other
+contributors and maintainers.
+
+The labels in this list originated within the Kubernetes project.
+
+## Labels that apply to issues or PRs
+
+| Label | Description | Added By |
+|-------|-------------|----------|
+| api-review | Categorizes an issue or PR as actively needing an API review. | Any |
+| area/api | Issues or PRs related to an API | Any |
+| area/blog | Issues or PRs related to blog entries | Any |
+| area/build-release | Issues or PRs related to building and releasing | Any |
+| area/component/antctl | Issues or PRs related to the command line interface component | Any |
+| area/component/agent | Issues or PRs related to the Agent component | Any |
+| area/component/cni | Issues or PRs related to the cni component | Any |
+| area/component/controller | Issues or PRs related to the Controller component | Any |
+| area/component/flow-aggregator | Issues or PRs related to the Flow Aggregator component | Any |
+| area/dependency | Issues or PRs related to dependency changes | Any |
+| area/endpoint/identity | Issues or PRs related to endpoint identity | Any |
+| area/endpoint/selection | Issues or PRs related to endpoint selection | Any |
+| area/endpoint/type | Issues or PRs related to endpoint type | Any |
+| area/flow-visibility | Issues or PRs related to flow visibility support in Antrea | Any |
+| area/flow-visibility/aggregation | Issues or PRs related to flow aggregation | Any |
+| area/flow-visibility/export | Issues or PRs related to the Flow Exporter functions in the Agent | Any |
+| area/github-membership | Categorizes an issue as a membership request to join the antrea-io Github organization | Any |
+| area/grouping | Issues or PRs related to ClusterGroup, Group API | Any |
+| area/ipam | Issues or PRs related to IP address management (IPAM) | Any |
+| area/interface | Issues or PRs related to network interfaces | Any |
+| area/licensing | Issues or PRs related to Antrea licensing | Any |
+| area/monitoring/auditing | Issues or PRs related to auditing | Any |
+| area/monitoring/health-performance | Issues or PRs related to health and performance monitoring | Any |
+| area/monitoring/logging | Issues or PRs related to logging | Any |
+| area/monitoring/mirroring | Issues or PRs related to mirroring | Any |
+| area/monitoring/traffic-analysis | Issues or PRs related to traffic analysis | Any |
+| area/multi-cluster | Issues or PRs related to multi cluster | Any |
+| area/network-policy | Issues or PRs related to network policy | Any |
+| area/network-policy/action | Issues or PRs related to network policy actions | Any |
+| area/network-policy/agent | Issues or PRs related to the network policy agents | Any |
+| area/network-policy/api | Issues or PRs related to the network policy API | Any |
+| area/network-policy/controller | Issues or PRs related to the network policy controller | Any |
+| area/network-policy/lifecycle | Issues or PRs related to the network policy lifecycle | Any |
+| area/network-policy/match | Issues or PRs related to matching packets | Any |
+| area/network-policy/precedence | Issues or PRs related to network policy precedence | Any |
+| area/ops | Issues or PRs related to features which support network operations and troubleshooting | Any |
+| area/ops/traceflow | Issues or PRs related to the Traceflow feature | Any |
+| area/ovs/openflow | Issues or PRs related to Open vSwitch Open Flow | Any |
+| area/ovs/ovsdb | Issues or PRs related to Open vSwitch database | Any |
+| area/OS/linux | Issues or PRs related to the Linux operating system | Any |
+| area/OS/windows | Issues or PRs related to the Windows operating system | Any |
+| area/provider/aws | Issues or PRs related to aws provider | Any |
+| area/provider/azure | Issues or PRs related to azure provider | Any |
+| area/provider/gcp | Issues or PRs related to gcp provider | Any |
+| area/provider/vmware | Issues or PRs related to vmware provider | Any |
+| area/proxy | Issues or PRs related to proxy functions in Antrea | Any |
+| area/proxy/clusterip | Issues or PRs related to the implementation of ClusterIP Services | Any |
+| area/proxy/nodeport | Issues or PRs related to the implementation of NodePort Services | Any |
+| area/proxy/nodeportlocal | Issues or PRs related to the NodePortLocal feature | Any |
+| area/secondary-network | Issues or PRs related to support for secondary networks in Antrea | Any |
+| area/security/access-control | Issues or PRs related to access control | Any |
+| area/security/controlplane | Issues or PRs related to controlplane security | Any |
+| area/security/dataplane | Issues or PRs related to dataplane security | Any |
+| area/test | Issues or PRs related to unit and integration tests. | Any |
+| area/test/community | Issues or PRs related to community testing | Any |
+| area/test/e2e | Issues or PRs related to Antrea specific end-to-end testing. | Any |
+| area/test/infra | Issues or PRs related to test infrastructure (Jenkins configuration, Ansible playbook, Kind wrappers, ...) | Any |
+| area/transit/addressing | Issues or PRs related to IP addressing category (unicast, multicast, broadcast, anycast) | Any |
+| area/transit/egress | Issues or PRs related to Egress (SNAT for traffic egressing the cluster) | Any |
+| area/transit/encapsulation | Issues or PRs related to encapsulation | Any |
+| area/transit/encryption | Issues or PRs related to transit encryption (IPsec, SSL) | Any |
+| area/transit/ipv6 | Issues or PRs related to IPv6 | Any |
+| area/transit/qos | Issues or PRs related to transit qos or policing | Any |
+| area/transit/routing | Issues or PRs related to routing and forwarding | Any |
+| area/transit/bgp | Issues or PRs related to BGP support | Any |
+| kind/api-change | Categorizes issue or PR as related to adding, removing, or otherwise changing an API. | Any |
+| kind/bug | Categorizes issue or PR as related to a bug. | Any |
+| kind/cherry-pick | Categorizes issue or PR as related to the cherry-pick of a bug fix from the main branch to a release branch | Any |
+| kind/cleanup | Categorizes issue or PR as related to cleaning up code, process, or technical debt | Any |
+| kind/deprecation | Categorizes issue or PR as related to feature marked for deprecation | Any |
+| kind/design | Categorizes issue or PR as related to design | Any |
+| kind/documentation | Categorizes issue or PR as related to a documentation. | Any |
+| kind/failing-test | Categorizes issue or PR as related to a consistently or frequently failing test | Any |
+| kind/feature | Categorizes issue or PR as related to a new feature. | Any |
+| kind/release | Categorizes a PR used to create a new release (with CHANGELOG and VERSION updates) | Maintainers |
+| kind/support | Categorizes issue or PR as related to a support question. | Any |
+| kind/task | Categorizes issue or PR as related to a routine task that needs to be performed. | Any |
+| lifecycle/active | Indicates that an issue or PR is actively being worked on by a contributor. | Any |
+| lifecycle/frozen | Indicates that an issue or PR should not be auto-closed due to staleness. | Any |
+| lifecycle/stale | Denotes an issue or PR has remained open with no activity and has become stale. | Any |
+| priority/awaiting-more-evidence | Lowest priority. Possibly useful, but not yet enough support to actually get it done. | Any |
+| priority/backlog | Higher priority than priority/awaiting-more-evidence. | Any |
+| priority/critical-urgent | Highest priority. Must be actively worked on as someone's top priority right now. | Any |
+| priority/important-longterm | Important over the long term, but may not be staffed and/or may need multiple releases to complete. | Any |
+| priority/import-soon | Must be staffed and worked on either currently, or very soon, ideally in time for the next release. | Any |
+| ready-to-work | Indicates that an issue or PR has been sufficiently triaged and prioritized and is now ready to work. | Any |
+| size/L | Denotes a PR that changes 100-499 lines, ignoring generated files. | Any |
+| size/M | Denotes a PR that changes 30-99 lines, ignoring generated files. | Any |
+| size/S | Denotes a PR that changes 10-29 lines, ignoring generated files. | Any |
+| size/XL | Denotes a PR that changes 500+ lines, ignoring generated files. | Any |
+| size/XS | Denotes a PR that changes 0-9 lines, ignoring generated files. | Any |
+| triage/duplicate | Indicates an issue is a duplicate of another open issue. | Humans |
+| triage/needs-information | Indicates an issue needs more information in order to work on it. | Humans |
+| triage/not-reproducible | Indicates an issue can not be reproduced as described. | Humans |
+| triage/unresolved | Indicates an issue that can not or will not be resolved. | Humans |
+| action/backport | Indicates a PR that requires backports. | Humans |
+| action/release-note | Indicates a PR that should be included in release notes. | Humans |
+
+## Labels that apply only to issues
+
+| Label | Description | Added By |
+|-------|-------------|----------|
+| good first issue | Denotes an issue ready for a new contributor, according to the "help wanted" [guidelines](issue-management.md#good-first-issues-and-help-wanted). | Anyone |
+| help wanted | Denotes an issue that needs help from a contributor. Must meet "help wanted" [guidelines](issue-management.md#good-first-issues-and-help-wanted). | Anyone |
+
+## Labels that apply only to PRs
+
+| Label | Description | Added By |
+|-------|-------------|----------|
+| approved | Indicates a PR has been approved by owners in accordance with [GOVERNANCE.md](../../GOVERNANCE.md) guidelines. | Maintainers |
+| do-not-merge/hold | Indicates a PR should not be merged because someone has issued a /hold command | Merge Bot |
+| do-not-merge/work-in-progress | Indicates that a PR should not be merged because it is a work in progress. | Merge Bot |
+| lgtm | Indicates that a PR is ready to be merged. | Merge Bot |
diff --git a/content/docs/v2.2.0-alpha.2/docs/contributors/issue-management.md b/content/docs/v2.2.0-alpha.2/docs/contributors/issue-management.md
new file mode 100644
index 00000000..f7ff5546
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/contributors/issue-management.md
@@ -0,0 +1,406 @@
+# Issue Management
+
+This document further describes the developer workflow and how issues are
+managed as introduced in [CONTRIBUTING.md](../../CONTRIBUTING.md). Please read
+[CONTRIBUTING.md](../../CONTRIBUTING.md) first before proceeding.
+
+
+- [Developer Workflow Overview](#developer-workflow-overview)
+- [Creating New Issues and PRs](#creating-new-issues-and-prs)
+- [Good First Issues and Help Wanted](#good-first-issues-and-help-wanted)
+- [Issue and PR Triage Process](#issue-and-pr-triage-process)
+ - [Issue Triage](#issue-triage)
+ - [PR Triage](#pr-triage)
+- [Working an Issue](#working-an-issue)
+- [Issue and PR Labels](#issue-and-pr-labels)
+ - [Issue Kinds](#issue-kinds)
+ - [API Change](#api-change)
+ - [Bug](#bug)
+ - [Cleanup](#cleanup)
+ - [Feature](#feature)
+ - [Deprecation](#deprecation)
+ - [Task](#task)
+ - [Design](#design)
+ - [Documentation](#documentation)
+ - [Failing Test](#failing-test)
+ - [Support](#support)
+ - [Area](#area)
+ - [Size](#size)
+ - [Triage](#triage)
+ - [Lifecycle](#lifecycle)
+ - [Priority](#priority)
+
+
+## Developer Workflow Overview
+
+The purpose of this workflow is to formalize a lightweight set of processes that
+will optimize issue triage and management, which will lead to better release
+predictability and community responsiveness for support and feature
+enhancements. Additionally, Antrea must prioritize issues to ensure interlock
+alignment and compatibility with other projects including Kubernetes. The
+processes described here will aid in accomplishing these goals.
+
+![developer workflow overview](../assets/developer-workflow-opaque-bg.png)
+
+## Creating New Issues and PRs
+
+Creating new issues and PRs is covered in detail in
+[CONTRIBUTING.md](../../CONTRIBUTING.md).
+
+## Good First Issues and Help Wanted
+
+We use `good first issue` and `help wanted` labels to indicate issues we would
+like contribution on. These two labels were borrowed from the Kubernetes project
+and represent the same context as described in [Help Wanted and Good First Issue
+Labels](https://www.kubernetes.dev/docs/guide/help-wanted/).
+
+We do not yet support the automation mentioned in the Kubernetes help guide.
+
+To summarize:
+
+* `good first issue` -- issues intended for first-time contributors. Members
+  should keep an eye out for these pull requests and shepherd them through our
+  processes.
+* `help wanted` -- issues that represent clearly laid out tasks that are
+ generally tractable for new contributors. The solution has already been
+ designed and requires no further discussion from the community. This label
+ indicates we need additional contributors to help move this task along.
+
+## Issue and PR Triage Process
+
+When new issues or PRs are created, the maintainers must triage the issue
+to ensure the information is valid, complete, and properly categorized and
+prioritized.
+
+### Issue Triage
+
+An issue is triaged in the following way:
+
+1. Ensure the issue is not a duplicate. Do a quick search against existing
+ issues to determine if the issue has been or is currently being worked on. If
+ you suspect the issue is a duplicate, apply the [`triage/duplicate`](#triage) label.
+2. Ensure that the issue has captured all the information required for the given
+   issue [`kind/`](#issue-kinds). If information or context is needed, apply the
+   `triage/needs-information` label.
+3. Apply any missing [`area/`](#area) labels. An issue can relate to more
+ than one area.
+4. Apply a [`priority/`](#priority) label. This may require further
+ discussion during the community meeting if the priority cannot be determined.
+ If undetermined, do not apply a priority. Issues with unassigned priorities
+ will be selected for review.
+5. Apply a [`size/`](#size) label if known. This may require further
+ discussion, a research spike or review by the assigned contributor who will
+ be working on this issue. This is only an estimate of the complexity and size
+ of the issue.
+
+Once an issue has been triaged, a comment should be left for the original
+submitter to respond to any applied triage labels.
+
+If all triage labels have been addressed and the issue is ready to be worked,
+apply the label `ready-to-work` so the issue can be assigned to a milestone and
+worked by a contributor.
+
+If it is determined that an issue will not be resolved or fixed, apply the
+`triage/unresolved` label and leave a reason in a comment for the original
+submitter. Unresolved issues can be closed after giving the original submitter
+an opportunity to appeal the reason supplied.
+
+### PR Triage
+
+A PR is triaged in the following way:
+
+1. Automation will ensure that the submitter has signed the [CLA](../../CONTRIBUTING.md#cla).
+2. Automation will run CI tests against the submission to ensure compliance.
+3. Apply [`size/`](#size) label to the submission. (TODO: we plan to
+ automate this with a GitHub action and apply size based on lines of code).
+4. Ensure that the PR references an existing issue (exceptions to this should be
+ rare). If the PR is missing this or needs any additional information, note it
+ in the comment and apply the `triage/needs-information` label.
+5. The PR should have the same `area/`, `kind/`, and `lifecycle/` labels as
+   those of the referenced issue. (TODO: we plan to automate this with a GitHub
+   action and apply labels automatically)
+
+## Working an Issue
+
+When starting work on an issue, assign the issue to yourself if it has not
+already been assigned and apply the `lifecycle/active` label to signal that the
+issue is actively being worked on.
+
+Making code changes is covered in detail in
+[CONTRIBUTING.md](../../CONTRIBUTING.md#github-workflow).
+
+If the issue kind is a `kind/bug`, ensure that the issue can be reproduced. If
+not, apply the `triage/not-reproducible` label and request feedback from the original
+submitter.
+
+## Issue and PR Labels
+
+This section describes the label metadata we use to track issues and PRs. For a
+definitive list of all GitHub labels used within this project, please see
+[github-labels.md](github-labels.md).
+
+### Issue Kinds
+
+An issue kind describes the kind of contribution being requested or submitted.
+In some cases, the kind will also influence how the issue or PR is triaged and
+worked.
+
+#### API Change
+
+A `kind/api-change` label categorizes an issue or PR as related to adding, removing,
+or otherwise changing an API.
+
+All API changes must be reviewed by maintainers in addition to the standard code
+review and approval workflow.
+
+To create an API change issue or PR:
+
+* label your issue or PR with `kind/api-change`
+* describe in the issue or PR body which API you are changing, making sure to include
+ * API endpoint and schema (endpoint, Version, APIGroup, etc.)
+ * Is this a breaking change?
+ * Can new or older clients opt-in to this API?
+ * Is there a fallback? What are the implications of not supporting this API version?
+ * How is an upgrade handled? If automatically, we need to ensure proper tests
+ are created. If we require a manual upgrade procedure, this needs to be
+ noted so that the release notes and docs can be updated appropriately.
+
+Before starting any work on an API change it is important that you have proper
+review and approval from the project maintainers.
+
+#### Bug
+
+A `kind/bug` label categorizes an issue or PR as related to a bug.
+
+Any problem encountered when building, configuring, or running Antrea could be a
+potential case for submitting a bug.
+
+To create a bug issue or bug fix PR:
+
+* label your issue or PR with `kind/bug`
+* describe your bug in the issue or PR body making sure to include:
+ * version of Antrea
+ * version of Kubernetes
+ * version of OS and any relevant environment or system configuration
+ * steps and/or configuration to reproduce the bug
+ * any tests that demonstrate the presence of the bug
+* please attach any relevant logs or diagnostic output
+
+#### Cleanup
+
+A `kind/cleanup` label categorizes an issue or PR as related to cleaning up
+code, process, or technical debt.
+
+To create a cleanup issue or PR:
+
+* label your issue or PR with `kind/cleanup`
+* describe your cleanup in the issue or PR body being sure to include
+ * what is being cleaned
+ * for what reason it is being cleaned (technical debt, deprecation, etc.)
+
+Examples of a cleanup include:
+
+* Adding comments to describe code execution
+* Making code easier to read and follow
+* Removing dead code related to deprecated features or implementations
+
+#### Feature
+
+A `kind/feature` label categorizes an issue or PR as related to a new feature.
+
+To create a feature issue or PR:
+
+* label your issue or PR with `kind/feature`
+* describe your proposed feature in the issue or PR body being sure to include
+ * a use case for the new feature
+ * list acceptance tests for the new feature
+ * describe any dependencies for the new feature
+* depending on the size and impact of the feature
+ * a design proposal may need to be submitted
+ * the feature may need to be discussed in the community meeting
+
+Before you begin work on your feature, it is important to ensure that you have
+proper review and approval from the project maintainers.
+
+Examples of a new feature include:
+
+* Adding a new set of metrics for enabling additional telemetry.
+* Adding additional supported transport layer protocol options for network policy.
+* Adding support for IPsec.
+
+#### Deprecation
+
+A `kind/deprecation` label categorizes an issue or PR as related to feature
+marked for deprecation.
+
+To create a deprecation issue or PR:
+
+* label your issue or PR with `kind/deprecation`
+* title the issue or PR with the feature you are deprecating
+* describe the deprecation in the issue or PR body making sure to:
+ * explain why the feature is being deprecated
+ * discuss time-to-live for the feature and when deprecation will take place
+ * discuss any impacts to existing APIs
+
+#### Task
+
+A `kind/task` label categorizes an issue or PR as related to a "routine"
+maintenance task for the project, e.g. upgrading a software dependency or
+enabling a new CI job.
+
+To create a task issue or PR:
+
+* label your issue or PR with `kind/task`
+* describe your task in the issue or PR body, being sure to include the reason
+ for the task and the possible impacts of the change
+
+#### Design
+
+A `kind/design` label categorizes an issue or PR as related to design.
+
+A design issue or PR is for discussing larger architectural and design proposals.
+Approval of a design proposal may result in multiple additional feature,
+api-change, or cleanup issues being created to implement the design.
+
+To create a design issue:
+
+* label your issue or PR with `kind/design`
+* describe the design in the issue or PR body
+
+Before creating additional issues or PRs that implement the proposed design it is
+important to get feedback and approval from the maintainers. Design feedback
+could include some of the following:
+
+* needs additional detail
+* no, this problem should be solved in another way
+* this is desirable but we need help completing other issues or PRs first; then we will
+ consider this design
+
+#### Documentation
+
+A `kind/documentation` label categorizes an issue or PR as related to
+documentation.
+
+To create a documentation issue or PR:
+
+* label your issue or PR with `kind/documentation`
+* title the issue with a short description of what you are documenting
+* provide a brief summary in the issue or PR body of what you are documenting. In some
+ cases, it might be useful to include a checklist of changed documentation
+ files to indicate your progress.
+
+#### Failing Test
+
+A `kind/failing-test` label categorizes an issue or PR as related to a consistently
+or frequently failing test.
+
+To create a failing test issue or PR:
+
+* label your issue or PR with `kind/failing-test`
+
+TODO: As more automation is used in the continuous integration pipeline, we will
+be able to automatically generate an issue for failing tests.
+
+#### Support
+
+A `kind/support` label categorizes an issue as related to a support request.
+
+To create a support issue or PR:
+
+* label your issue or PR with `kind/support`
+* title the issue or PR with a short description of your support request
+* answer all of the questions in the support issue template
+* to provide comprehensive information about your cluster that will be useful in
+ identifying and resolving the issue, you may want to consider producing a
+ ["support bundle"](../antctl.md/#collecting-support-information) and uploading it
+ to a publicly-accessible location. **Be aware that the generated support
+ bundle includes a lot of information, including logs, so please ensure that
+ you do not share anything sensitive.**
+
+### Area
+
+Area labels begin with `area/` and identify areas of interest or functionality
+to which an issue relates. An issue or PR could have multiple areas. These labels are
+used to sort issues and PRs into categories such as:
+
+* operating systems
+* cloud platforms
+* functional areas
+* operational or legal areas (e.g., licensing)
+
+A list of areas is maintained in [`github-labels.md`](github-labels.md).
+
+An area may be changed, added or deleted during issue or PR triage.
+
+### Size
+
+Size labels begin with `size/` and estimate the relative complexity or work
+required to resolve an issue or PR.
+
+TODO: For submitted PRs, the size can be automatically calculated and the
+appropriate label assigned.
+
+Size labels are specified according to lines of code; however, some issues may
+not relate to lines of code submission such as documentation. In those cases,
+use the labels to apply an equivalent complexity or size to the task at hand.
+
+Size labels include:
+
+* `size/XS` -- denotes an extra small issue or PR that changes 0-9 lines, ignoring generated files
+* `size/S` -- denotes a small issue or PR that changes 10-29 lines, ignoring generated files
+* `size/M` -- denotes a medium issue or PR that changes 30-99 lines, ignoring generated files
+* `size/L` -- denotes a large issue or PR that changes 100-499 lines, ignoring generated files
+* `size/XL` -- denotes a very large issue or PR that changes 500+ lines, ignoring generated files
+
+Size labels are defined in [`github-labels.md`](github-labels.md).
+
+### Triage
+
+As soon as new issues are submitted, they must be triaged until they are ready to
+work. The maintainers may apply the following labels during the issue triage
+process:
+
+* `triage/duplicate` -- indicates an issue is a duplicate of another open issue
+* `triage/needs-information` -- indicates an issue needs more information in order to work on it
+* `triage/not-reproducible` -- indicates an issue can not be reproduced as described
+* `triage/unresolved` -- indicates an issue that can not or will not be resolved
+
+Triage labels are defined in [`github-labels.md`](github-labels.md).
+
+### Lifecycle
+
+To track the state of an issue, the following labels will be assigned.
+
+* `lifecycle/active` -- indicates that an issue or PR is actively being worked on by a contributor
+* `lifecycle/frozen` -- indicates that an issue or PR should not be auto-closed due to staleness
+* `lifecycle/stale` -- denotes an issue or PR has remained open with no activity and has become stale
+
+The following schedule will be used to determine an issue's lifecycle:
+
+* after 180 days of inactivity, an issue will be automatically marked as `lifecycle/stale`
+* after an extra 180 days of inactivity, an issue will be automatically closed
+* any issue marked as `lifecycle/frozen` will prevent automatic transitions to
+ stale and prevent auto-closure
+* commenting on an issue will remove the `lifecycle/stale` label
+
+Issue lifecycle management ensures that the project backlog remains fresh and
+relevant. Project maintainers and contributors will need to revisit issues to
+periodically assess their relevance and progress.
+
+TODO: Additional CI automation (GitHub actions) will be used to automatically
+apply and manage some of these lifecycle labels.
+
+Lifecycle labels are defined in [`github-labels.md`](github-labels.md).
+
+### Priority
+
+A priority label signifies the overall priority that should be given to an
+issue or PR. Priorities are considered during backlog grooming and help to
+determine the number of features included in a milestone.
+
+* `priority/awaiting-more-evidence` -- lowest priority. Possibly useful, but not yet enough support to actually get it done.
+* `priority/backlog` -- higher priority than priority/awaiting-more-evidence.
+* `priority/critical-urgent` -- highest priority. Must be actively worked on as someone's top priority right now.
+* `priority/important-longterm` -- important over the long term, but may not be staffed and/or may need multiple releases to complete.
+* `priority/import-soon` -- must be staffed and worked on either currently, or very soon, ideally in time for the next release.
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/fluentd/README.md b/content/docs/v2.2.0-alpha.2/docs/cookbooks/fluentd/README.md
new file mode 100644
index 00000000..8419bae9
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/fluentd/README.md
@@ -0,0 +1,147 @@
+# Using Antrea with Fluentd
+
+This guide will describe how to use Project Antrea with
+[Fluentd](https://github.com/fluent/fluentd-kubernetes-daemonset),
+in order to achieve efficient audit logging.
+In this scenario, Antrea is used for the default network,
+[Elasticsearch](https://www.elastic.co/) is used for the default storage,
+and a [Kibana](https://www.elastic.co/kibana/) dashboard is used for visualization.
+
+
+- [Prerequisites](#prerequisites)
+- [Practical steps](#practical-steps)
+ - [Step 1: Deploying Antrea](#step-1-deploying-antrea)
+ - [Step 2: Deploy Elasticsearch and Kibana Dashboard](#step-2-deploy-elasticsearch-and-kibana-dashboard)
+ - [Step 3: Configure Custom Fluentd Plugins](#step-3-configure-custom-fluentd-plugins)
+ - [Step 4: Deploy Fluentd DaemonSet](#step-4-deploy-fluentd-daemonset)
+ - [Step 5: Visualize with Kibana Dashboard](#step-5-visualize-with-kibana-dashboard)
+- [Email Alerting](#email-alerting)
+
+
+## Prerequisites
+
+The only prerequisites are:
+
+* a K8s cluster (Linux Nodes) running a K8s version supported by Antrea.
+* [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+
+All the required software will be deployed using YAML manifests, and the
+corresponding container images will be downloaded from public registries.
+
+## Practical steps
+
+### Step 1: Deploying Antrea
+
+For detailed information on the Antrea requirements and instructions on how to
+deploy Antrea, please refer to
+[getting-started.md](../../getting-started.md). To deploy the latest version of
+Antrea, use:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+You may also choose a [released Antrea
+version](https://github.com/antrea-io/antrea/releases).
+
+### Step 2: Deploy Elasticsearch and Kibana Dashboard
+
+Fluentd supports multiple [output plugins](https://www.fluentd.org/plugins).
+Details will be discussed in [Step 4](#step-4-deploy-fluentd-daemonset), but by
+default, log records are collected by the Fluentd DaemonSet and sent to
+Elasticsearch. A Kibana dashboard can then be used to visualize the data. The
+YAML file for the deployment is included in the `resources` directory. To
+deploy Elasticsearch and Kibana, run:
+
+```bash
+kubectl apply -f docs/cookbooks/fluentd/resources/kibana-elasticsearch.yml
+```
+
+### Step 3: Configure Custom Fluentd Plugins
+
+The architecture of Fluentd is a pipeline of input -> parser -> buffer ->
+output -> formatter stages; many of these are plugins that can be configured
+to fit users' different use cases.
+
+To specify custom input plugins and parsers, modify `./resources/kubernetes.conf`
+and create a ConfigMap with the following command; the Fluentd DaemonSet will
+later be directed to refer to that ConfigMap. To see more variations of custom
+configuration, refer to [Fluentd inputs](https://docs.fluentd.org/input).
+This cookbook uses the [tail](https://docs.fluentd.org/input/tail)
+input plugin to monitor the audit logging files for Antrea-native policies
+on every K8s Node.
+
+```bash
+kubectl create configmap fluentd-conf --from-file=docs/cookbooks/fluentd/resources/kubernetes.conf --namespace=kube-logging
+```
+
+### Step 4: Deploy Fluentd DaemonSet
+
+The Fluentd deployment includes RBAC resources and a DaemonSet. Fluentd will
+collect logs from cluster components, so permissions need to be granted first
+through RBAC. In `fluentd.yml`, we create a ServiceAccount, and use a ClusterRole
+and a ClusterRoleBinding to grant it permissions to read, list and watch
+Pods in cluster scope.
+
+In the DaemonSet configuration, specify the Elasticsearch host, port and scheme,
+as they are required by the Elasticsearch output plugin.
+In [Fluentd official documentation](https://github.com/fluent/fluentd-kubernetes-daemonset),
+output plugins are specified in `fluent.conf` depending on the chosen image.
+To change output plugins, choose a different image and specify it in `./resources/fluentd.yml`.
+When choosing the image version, note that the current Elasticsearch version
+specified in `resources/kibana-elasticsearch.yml` is 7.8.0 and that the major
+Elasticsearch version must match between the two files.
+
+```bash
+kubectl apply -f docs/cookbooks/fluentd/resources/fluentd.yml
+```
+
+### Step 5: Visualize with Kibana Dashboard
+
+Navigate to `http://[NodeIP]:30007` and create an index pattern with "fluentd-*".
+Go to `http://[NodeIP]:30007/app/kibana#/discover` to see the results as below.
+
+{{< img src="https://downloads.antrea.io/static/07062023/audit-logging-fluentd-kibana.png" width="900" alt="Audit Logging Fluentd Kibana" >}}
+
+## Email Alerting
+
+The Kibana dashboard supports creating alerts with the logs, as described in this
+[guide](https://www.elastic.co/guide/en/kibana/current/alerting-getting-started.html).
+This [documentation](https://docs.fluentd.org/how-to-guides/splunk-like-grep-and-alert-email)
+also provides a detailed guide for email alerting when using td-agent
+(the stable distribution of Fluentd, which comes preconfigured).
+
+For this cookbook with custom Fluentd configuration, modify and add the following
+code to `./resources/kubernetes.conf`, then update the ConfigMap as described in
+[Step 3: Configure Custom Fluentd Plugins](#step-3-configure-custom-fluentd-plugins).
+
+```editorconfig
+<filter antrea-networkpolicy>
+  @type grepcounter
+  count_interval 3            # The time window for counting errors (in secs)
+  input_key code              # The field to apply the regular expression
+  regexp ^5\d\d$              # The regular expression to be applied
+  threshold 1                 # The minimum number of errors to trigger an alert
+  add_tag_prefix error_ANPxx  # Generate tags like "error_ANPxx.antrea-networkpolicy"
+</filter>
+
+<match error_ANPxx.**>
+  @type copy
+  <store>
+    @type stdout              # Print to stdout for debugging
+  </store>
+  <store>
+    @type mail
+    host smtp.gmail.com       # Change this to your SMTP server host
+    port 587                  # Normally 25/587/465 are used for submission
+    user USERNAME             # Use your username to log in
+    password PASSWORD         # Use your login password
+    enable_starttls_auto true # Use this option to enable STARTTLS
+    from example@antrea.com   # Set the sender address
+    to alert@example.com      # Set the recipient address
+    subject 'Antrea Native Policy Error'
+    message Total ANPxx error count: %s\n\nPlease check Antrea Native Policy feature ASAP
+    message_out_keys count    # Use the "count" field to replace "%s" above
+  </store>
+</match>
+```
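+
+One way to push the updated configuration is to regenerate the ConfigMap and
+apply it; the DaemonSet name `fluentd` below is an assumption, so use the name
+from `./resources/fluentd.yml`:
+
+```bash
+kubectl create configmap fluentd-conf \
+  --from-file=docs/cookbooks/fluentd/resources/kubernetes.conf \
+  --namespace=kube-logging --dry-run=client -o yaml | kubectl apply -f -
+# Restart the Fluentd Pods so they pick up the new ConfigMap
+kubectl rollout restart daemonset fluentd --namespace=kube-logging
+```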
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/fluentd/_index.md b/content/docs/v2.2.0-alpha.2/docs/cookbooks/fluentd/_index.md
new file mode 100644
index 00000000..15878bff
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/fluentd/_index.md
@@ -0,0 +1,4 @@
+---
+---
+
+{{% include-md README.md %}}
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/ids/README.md b/content/docs/v2.2.0-alpha.2/docs/cookbooks/ids/README.md
new file mode 100644
index 00000000..52427bbc
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/ids/README.md
@@ -0,0 +1,175 @@
+# Using Antrea with IDS
+
+This guide will describe how to use Project Antrea with threat detection
+engines, in order to provide network-based intrusion detection service to your
+Pods. In this scenario, Antrea is used for the default Pod network. For the sake
+of this guide, we will use [Suricata](https://suricata.io/) as the threat
+detection engine, but similar steps should apply for other engines as well.
+
+The solution works by configuring a TrafficControl resource applying to specific
+Pods. Traffic originating from the Pods or destined for the Pods is mirrored,
+and then inspected by Suricata to provide threat detection. Suricata is
+configured in IDS mode in this example, but it can also be configured in
+IPS/inline mode to proactively drop the traffic determined to be malicious.
+
+
+- [Prerequisites](#prerequisites)
+- [Practical steps](#practical-steps)
+ - [Step 1: Deploy Antrea](#step-1-deploy-antrea)
+ - [Step 2: Configure TrafficControl resource](#step-2-configure-trafficcontrol-resource)
+ - [Step 3: Deploy Suricata as a DaemonSet](#step-3-deploy-suricata-as-a-daemonset)
+- [Testing](#testing)
+
+
+## Prerequisites
+
+The general prerequisites are:
+
+* a K8s cluster running a K8s version supported by Antrea.
+* [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+
+The [TrafficControl](../../traffic-control.md) capability was added in Antrea
+version 1.7. Therefore, an Antrea version >= v1.7.0 should be used to configure
+Pod traffic mirroring.
+
+All the required software will be deployed using YAML manifests, and the
+corresponding container images will be downloaded from public registries.
+
+## Practical steps
+
+### Step 1: Deploy Antrea
+
+For detailed information on the Antrea requirements and instructions on how to
+deploy Antrea, please refer to [getting-started.md](../../getting-started.md).
+As of now, the `TrafficControl` feature gate is disabled by default, so you will
+need to enable it, as done in the command below.
+
+To deploy the latest version of Antrea, use:
+
+```bash
+curl -s https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml | \
+ sed "s/.*TrafficControl:.*/ TrafficControl: true/" | \
+ kubectl apply -f -
+```
+
+You may also choose a [released Antrea
+version](https://github.com/antrea-io/antrea/releases).
+
+### Step 2: Configure TrafficControl resource
+
+To replicate Pod traffic to Suricata for analysis, create a TrafficControl with
+the `Mirror` action, and set the `targetPort` to an OVS internal port that
+Suricata will capture traffic from. This cookbook uses `tap0` as the port name
+and performs intrusion detection for Pods with the `app=web` label:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+  name: mirror-web-app
+spec:
+  appliedTo:
+    podSelector:
+      matchLabels:
+        app: web
+  direction: Both
+  action: Mirror
+  targetPort:
+    ovsInternal:
+      name: tap0
+EOF
+```
+
+### Step 3: Deploy Suricata as a DaemonSet
+
+We deploy Suricata as a DaemonSet, using the `jasonish/suricata` image.
+
+As the TrafficControl resource configured in the second step mirrors traffic to
+`tap0`, we run Suricata in the host network and specify `tap0` as the capture
+interface.
+
+```yaml
+spec:
+ hostNetwork: true
+ containers:
+ - name: suricata
+ image: jasonish/suricata:latest
+ command:
+ - /usr/bin/suricata
+ - -i
+ - tap0
+```
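+
+Once the TrafficControl resource has been realized, you can check on a Node that
+the mirror target port exists (a quick sanity check):
+
+```bash
+# tap0 is the OVS internal port created as the TrafficControl mirror target
+ip link show tap0
+```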
+
+Suricata uses Signatures (rules) to trigger alerts. We use the default ruleset
+installed at `/var/lib/suricata/rules` of the image `jasonish/suricata`.
+
+The directory `/var/log/suricata` contains alert events. We mount the directory
+as a `hostPath` volume to expose and persist them on the host:
+
+```yaml
+spec:
+ containers:
+ - name: suricata
+ volumeMounts:
+ - name: host-var-log-suricata
+ mountPath: /var/log/suricata
+ volumes:
+ - name: host-var-log-suricata
+ hostPath:
+ path: /var/log/suricata
+ type: DirectoryOrCreate
+```
+
+To deploy Suricata, run:
+
+```bash
+kubectl apply -f docs/cookbooks/ids/resources/suricata.yml
+```
+
+## Testing
+
+To test the IDS functionality, you can create a Pod with the `app=web` label,
+using the following command:
+
+```bash
+kubectl create deploy web --image nginx:1.21.6
+```
+
+Let's log into the Node that the test Pod runs on and start `tail` to see
+updates to the alert log `/var/log/suricata/fast.log`:
+
+```bash
+tail -f /var/log/suricata/fast.log
+```
+
+You can then generate malicious requests to trigger alerts. For ingress traffic,
+you can fake a web application attack against the Pod with the following command
+(assuming that the Pod IP is 10.10.2.3):
+
+```bash
+curl http://10.10.2.3/dlink/hwiz.html
+```
+
+The following output should now be seen in the log:
+
+```text
+05/17/2022-04:29:51.717452 [**] [1:2008942:8] ET POLICY Dlink Soho Router Config Page Access Attempt [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 10.10.2.1:48600 -> 10.10.2.3:80
+```
+
+For egress traffic, you can `kubectl exec` into the Pods and generate malicious
+requests against an external web server with the following command:
+
+```bash
+kubectl exec deploy/web -- curl -s http://testmynids.org/uid/index.html
+```
+
+The following output should now be seen in the log:
+
+```text
+05/17/2022-04:36:46.706373 [**] [1:2013028:6] ET POLICY curl User-Agent Outbound [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP} 10.10.2.3:55132 -> 65.8.161.92:80
+05/17/2022-04:36:46.708833 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 65.8.161.92:80 -> 10.10.2.3:55132
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/ids/_index.md b/content/docs/v2.2.0-alpha.2/docs/cookbooks/ids/_index.md
new file mode 100644
index 00000000..15878bff
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/ids/_index.md
@@ -0,0 +1,4 @@
+---
+---
+
+{{% include-md README.md %}}
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/README.md b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/README.md
new file mode 100644
index 00000000..ce7d7fb3
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/README.md
@@ -0,0 +1,381 @@
+# Using Antrea with Multus
+
+This guide will describe how to use Project Antrea with
+[Multus](https://github.com/k8snetworkplumbingwg/multus-cni), in order to attach multiple
+network interfaces to Pods. In this scenario, Antrea is used for the default
+network, i.e. it is the CNI plugin which provisions the "primary" network
+interface ("eth0") for each Pod. For the sake of this guide, we will use the
+[macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)
+CNI plugin to provision secondary network interfaces for selected Pods, but
+similar steps should apply for other plugins as well,
+e.g. [ipvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/ipvlan).
+
+## Prerequisites
+
+The general prerequisites are:
+
+* a K8s cluster (Linux Nodes) running a K8s version supported by Antrea. At the
+ time of writing, we recommend version 1.16 or later. Typically the cluster
+ needs to be running on a network infrastructure that you control. For example,
+ using macvlan networking will not work on public clouds like AWS.
+* [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+
+The Antrea IPAM capability for secondary networks was added in Antrea version
+1.7. To leverage Antrea IPAM for IP assignment of secondary networks, an Antrea
+version >= 1.7.0 should be used. There is no Antrea version requirement for
+other IPAM options.
+
+All the required software will be deployed using YAML manifests, and the
+corresponding container images will be downloaded from public registries.
+
+For the sake of this guide, we will use macvlan in "bridge" mode, which supports
+the creation of multiple subinterfaces on one parent interface, and connects
+them all using a bridge. Macvlan in "bridge" mode requires the network to be
+able to handle "promiscuous mode", as the same physical interface / virtual
+adapter ends up being assigned multiple MAC addresses. When using a virtual
+network for the Nodes, some configuration changes are usually required, which
+depend on the virtualization technology. For example:
+
+* when using VirtualBox and [Internal
+ Networking](https://www.virtualbox.org/manual/ch06.html#network_internal), set
+ the `Promiscuous Mode` to `Allow All`
+* when using VMware Fusion, enable "promiscuous mode" in the guest (Node) for
+ the appropriate interface (e.g. using `ifconfig`); this may prompt for your
+ password on the host unless you uncheck `Require authentication to enter
+ promiscuous mode` in `Preferences ... > Network`
+
+This needs to be done for every Node VM, so it's best if you can automate this
+when provisioning your VMs.
+
+### Suggested test cluster
+
+If you need to create a K8s cluster to test this guide, we suggest you create
+one by following [these
+steps](https://github.com/antrea-io/antrea/tree/main/test/e2e#creating-the-test-kubernetes-cluster-with-vagrant). You
+will need to use a slightly modified Vagrantfile, which you can find
+[here](test/Vagrantfile). Note that this Vagrantfile will create 3 VMs on your
+machine, and each VM will be allocated 2GB of memory, so make sure you have
+enough memory available. You can create the cluster with the following steps:
+
+```bash
+git clone https://github.com/antrea-io/antrea.git
+cd antrea
+cp docs/cookbooks/multus/test/Vagrantfile test/e2e/infra/vagrant/
+cd test/e2e/infra/vagrant
+./provision.sh
+```
+
+The last command will take around 10 to 15 minutes to complete. After that, your
+cluster is ready and you can set the `KUBECONFIG` environment variable in order
+to use `kubectl`:
+
+```bash
+export KUBECONFIG=`pwd`/infra/vagrant/playbook/kube/config
+kubectl cluster-info
+```
+
+The cluster that you have created by following these steps is the one we will
+use as an example in this guide.
+
+## Practical steps
+
+### Step 1: Deploying Antrea
+
+For detailed information on the Antrea requirements and instructions on how to
+deploy Antrea, please refer to [getting-started.md](../../getting-started.md).
+You can deploy the latest version of Antrea with
+[the manifest](https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml).
+You may also choose a [released Antrea version](https://github.com/antrea-io/antrea/releases).
+
+To leverage Antrea IPAM to assign IP addresses for the secondary network, you
+need to edit the Antrea deployment manifest and enable the `AntreaIPAM` feature
+gate for both `antrea-controller` and `antrea-agent`, and then deploy Antrea
+with:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
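+
+If you prefer not to edit the manifest by hand, the feature gate can also be
+enabled inline, similarly to what other cookbooks do; this sketch assumes the
+default formatting of the manifest:
+
+```bash
+curl -s https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml | \
+  sed "s/.*AntreaIPAM:.*/      AntreaIPAM: true/" | \
+  kubectl apply -f -
+```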
+
+If you choose other IPAM options like DHCP or Whereabouts, you can just deploy
+Antrea with the Antrea deployment manifest without modification:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+### Step 2: Deploy Multus as a DaemonSet
+
+```bash
+git clone https://github.com/k8snetworkplumbingwg/multus-cni && cd multus-cni
+cat ./deployments/multus-daemonset-thick-plugin.yml | kubectl apply -f -
+```
+
+### Step 3: Create an `IPPool` and a `NetworkAttachmentDefinition`
+
+With Antrea IPAM, the subnet and IP ranges for the secondary network are defined
+with an Antrea `IPPool` CR. To learn more information about Antrea IPAM for
+secondary network, please refer to the [Antrea IPAM documentation](../../antrea-ipam.md#ipam-for-secondary-network).
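+
+A minimal sketch of these two resources is shown below; the pool name
+(`macvlan-pool`), network name (`macvlan-conf`), parent interface (`enp0s9`),
+and IP range are assumptions based on the test cluster described earlier, and
+the exact `IPPool` schema depends on your Antrea version, so check the Antrea
+IPAM documentation referenced above and adjust for your environment:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: "crd.antrea.io/v1beta1"
+kind: IPPool
+metadata:
+  name: macvlan-pool
+spec:
+  ipRanges:
+  - start: "192.168.78.200"
+    end: "192.168.78.250"
+  subnetInfo:
+    gateway: "192.168.78.1"
+    prefixLength: 24
+EOF
+```
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: macvlan-conf
+spec:
+  config: '{
+      "cniVersion": "0.3.0",
+      "type": "macvlan",
+      "master": "enp0s9",
+      "mode": "bridge",
+      "ipam": {
+        "type": "antrea",
+        "ippools": [ "macvlan-pool" ]
+      }
+    }'
+EOF
+```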
+
+Once the `IPPool` and `NetworkAttachmentDefinition` are in place, test Pods can
+request a secondary interface through the `k8s.v1.cni.cncf.io/networks`
+annotation (the Pods below belong to a test Deployment named `samplepod`). You
+can then check where the Pods are running:
+
+```bash
+$ kubectl get pods -o wide
+NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE
+samplepod-7956c4498-65v6m   1/1     Running   0          68s   10.10.2.10   k8s-node-worker-2
+samplepod-7956c4498-9dz98   1/1     Running   0          68s   10.10.1.12   k8s-node-worker-1
+samplepod-7956c4498-ghrdg   1/1     Running   0          68s   10.10.1.13   k8s-node-worker-1
+samplepod-7956c4498-n65bn   1/1     Running   0          68s   10.10.2.12   k8s-node-worker-2
+samplepod-7956c4498-q6vp2   1/1     Running   0          68s   10.10.1.11   k8s-node-worker-1
+samplepod-7956c4498-xztf4   1/1     Running   0          68s   10.10.2.11   k8s-node-worker-2
+```
+
+```bash
+$ kubectl exec samplepod-7956c4498-65v6m -- ip addr
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+3: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
+    link/ether c2:ce:36:6b:ba:2d brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 10.10.2.10/24 brd 10.10.2.255 scope global eth0
+       valid_lft forever preferred_lft forever
+4: net1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+    link/ether be:a0:35:f2:08:2d brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 192.168.78.205/24 brd 192.168.78.255 scope global net1
+       valid_lft forever preferred_lft forever
+```
+
+```bash
+$ kubectl exec samplepod-7956c4498-9dz98 -- ip addr
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+3: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
+    link/ether 92:8f:8a:1d:a0:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 10.10.1.12/24 brd 10.10.1.255 scope global eth0
+       valid_lft forever preferred_lft forever
+4: net1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+    link/ether 22:6e:b1:0a:f3:ab brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 192.168.78.202/24 brd 192.168.78.255 scope global net1
+       valid_lft forever preferred_lft forever
+```
+
+```bash
+$ kubectl exec samplepod-7956c4498-9dz98 -- ping -c 3 192.168.78.205
+PING 192.168.78.205 (192.168.78.205) 56(84) bytes of data.
+64 bytes from 192.168.78.205: icmp_seq=1 ttl=64 time=0.846 ms
+64 bytes from 192.168.78.205: icmp_seq=2 ttl=64 time=0.410 ms
+64 bytes from 192.168.78.205: icmp_seq=3 ttl=64 time=0.507 ms
+
+--- 192.168.78.205 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2013ms
+rtt min/avg/max/mdev = 0.410/0.587/0.846/0.186 ms
+```
+
+## Overview of a test cluster Node
+
+The diagram below shows an overview of a K8s Node when using the [test cluster]
+with [DHCP] for IPAM, and following all the steps above. For the sake of
+completeness, we show the DHCP server running on that Node, but as we use a
+Deployment with a single replica, the server may be running on any worker Node
+in the cluster.
+
+{{< img src="assets/testbed-multus-macvlan.svg" width="900" alt="Test cluster Node" >}}
+
+## Using [Whereabouts] for IPAM
+
+If you do not already have a DHCP server for the underlying parent network and
+you find that deploying one in-cluster is impractical, you may want to consider
+using [Whereabouts] to assign IP addresses to the secondary interfaces. When
+using [Whereabouts], follow steps 1 and 2 above, along with step 4 if you want
+the Nodes to be able to communicate with the Pods using the secondary
+network.
+
+The next step is to install the [Whereabouts] plugin as follows:
+
+```bash
+git clone https://github.com/dougbtv/whereabouts && cd whereabouts
+kubectl apply -f ./doc/daemonset-install.yaml -f ./doc/whereabouts.cni.cncf.io_ippools.yaml
+```
+
+Then create a NetworkAttachmentDefinition like the one below, after ensuring
+that `"master"` matches the name of the parent interface on the Nodes, and that
+the `range` and `exclude` configuration parameters are correct for your cluster
+(in particular, make sure that you exclude IP addresses assigned to Nodes). If
+you are using our [test cluster], you can use the NetworkAttachmentDefinition
+below as is.
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: macvlan-conf
+spec:
+  config: '{
+      "cniVersion": "0.3.0",
+      "type": "macvlan",
+      "master": "enp0s9",
+      "mode": "bridge",
+      "ipam": {
+        "type": "whereabouts",
+        "range": "192.168.78.0/24",
+        "exclude": [
+          "192.168.78.100/32",
+          "192.168.78.101/32",
+          "192.168.78.102/32"
+        ]
+      }
+    }'
+EOF
+```
+
+[test cluster]: #suggested-test-cluster
+[DHCP]: https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp
+[Whereabouts]: https://github.com/k8snetworkplumbingwg/whereabouts
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/build/cni-dhcp-daemon/Dockerfile b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/build/cni-dhcp-daemon/Dockerfile
new file mode 100644
index 00000000..c5823977
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/build/cni-dhcp-daemon/Dockerfile
@@ -0,0 +1,33 @@
+# Copyright 2022 Antrea Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+FROM ubuntu:22.04 AS cni-binary
+
+LABEL maintainer="Antrea "
+LABEL description="A Docker image which runs the DHCP daemon from the containernetworking project."
+
+RUN apt-get update && \
+ apt-get install -y --no-install-recommends wget ca-certificates
+
+# Leading dot is required for the tar command below
+ENV CNI_PLUGINS="./dhcp"
+
+RUN mkdir -p /opt/cni/bin && \
+ wget -q -O - https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz | tar xz -C /opt/cni/bin $CNI_PLUGINS
+
+FROM ubuntu:22.04
+
+COPY --from=cni-binary /opt/cni/bin/* /usr/local/bin
+
+ENTRYPOINT ["dhcp", "daemon"]
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/build/cni-dhcp-daemon/README.md b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/build/cni-dhcp-daemon/README.md
new file mode 100644
index 00000000..88aa0fcc
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/build/cni-dhcp-daemon/README.md
@@ -0,0 +1,16 @@
+# cni-dhcp-daemon
+
+This Docker image can be used to run the [DHCP daemon from the
+containernetworking
+project](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp).
+
+If you need to build a new version of the image and push it to Dockerhub, you
+can run the following:
+
+```bash
+docker build -t antrea/cni-dhcp-daemon:latest .
+docker push antrea/cni-dhcp-daemon:latest
+```
+
+The `docker push` command will fail if you do not have permission to push to the
+`antrea` Dockerhub repository.
diff --git a/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/test/Vagrantfile b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/test/Vagrantfile
new file mode 100644
index 00000000..e16b3564
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/cookbooks/multus/test/Vagrantfile
@@ -0,0 +1,70 @@
+VAGRANTFILE_API_VERSION = "2"
+
+NUM_WORKERS = 2
+
+MODE = "v4"
+K8S_POD_NETWORK_CIDR = "10.10.0.0/16"
+K8S_SERVICE_NETWORK_CIDR = "10.96.0.0/12"
+K8S_NODE_CP_GW_IP = "10.10.0.1"
+
+Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
+ config.vm.box = "ubuntu/bionic64"
+
+ config.vm.provider "virtualbox" do |v|
+ v.memory = 2048
+    # 2 CPUs required to initialize the K8s cluster with "kubeadm init"
+ v.cpus = 2
+
+ v.customize [
+ "modifyvm", :id,
+ "--nicpromisc3", "allow-all"
+ ]
+ end
+
+ groups = {
+ "controlplane" => ["k8s-node-control-plane"],
+ "workers" => ["k8s-node-worker-[1:#{NUM_WORKERS}]"],
+ }
+
+ config.vm.define "k8s-node-control-plane" do |node|
+ node.vm.hostname = "k8s-node-control-plane"
+ node_ip = "192.168.77.100"
+ node.vm.network "private_network", ip: node_ip
+ node.vm.network "private_network", ip: "192.168.78.100", virtualbox__intnet: true
+
+ node.vm.provision :ansible do |ansible|
+ ansible.playbook = "playbook/k8s.yml"
+ ansible.groups = groups
+ ansible.extra_vars = {
+ # Ubuntu bionic does not ship with python2
+ ansible_python_interpreter:"/usr/bin/python3",
+ node_ip: node_ip,
+ node_name: "k8s-node-control-plane",
+ k8s_pod_network_cidr: K8S_POD_NETWORK_CIDR,
+ k8s_service_network_cidr: K8S_SERVICE_NETWORK_CIDR,
+ k8s_api_server_ip: node_ip,
+ k8s_ip_family: MODE,
+ k8s_antrea_gw_ip: K8S_NODE_CP_GW_IP,
+ }
+ end
+ end
+
+ (1..NUM_WORKERS).each do |node_id|
+ config.vm.define "k8s-node-worker-#{node_id}" do |node|
+ node.vm.hostname = "k8s-node-worker-#{node_id}"
+ node_ip = "192.168.77.#{100 + node_id}"
+ node.vm.network "private_network", ip: node_ip
+ node.vm.network "private_network", ip: "192.168.78.#{100 + node_id}", virtualbox__intnet: true
+
+ node.vm.provision :ansible do |ansible|
+ ansible.playbook = "playbook/k8s.yml"
+ ansible.groups = groups
+ ansible.extra_vars = {
+ ansible_python_interpreter:"/usr/bin/python3",
+ node_ip: node_ip,
+ node_name: "k8s-node-worker-#{node_id}",
+ }
+ end
+ end
+ end
+end
diff --git a/content/docs/v2.2.0-alpha.2/docs/design/architecture.md b/content/docs/v2.2.0-alpha.2/docs/design/architecture.md
new file mode 100644
index 00000000..368e78ae
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/design/architecture.md
@@ -0,0 +1,391 @@
+# Antrea Architecture
+
+Antrea is designed to be Kubernetes-centric and Kubernetes-native. It focuses on
+and is optimized for networking and security of a Kubernetes cluster. Its
+implementation leverages Kubernetes and Kubernetes native solutions as much as
+possible.
+
+Antrea leverages Open vSwitch as the networking data plane. Open vSwitch is a
+high-performance programmable virtual switch that supports both Linux and
+Windows. Open vSwitch enables Antrea to implement Kubernetes Network Policies
+in a high-performance and efficient manner. Thanks to the "programmable"
+characteristic of Open vSwitch, Antrea is able to implement an extensive set
+of networking and security features and services on top of Open vSwitch.
+
+Some information in this document and in particular when it comes to the Antrea
+Agent is specific to running Antrea on Linux Nodes. For information about how
+Antrea is run on Windows Nodes, please refer to the [Windows design document](windows-design.md).
+
+## Components
+
+In a Kubernetes cluster, Antrea creates a Deployment that runs the Antrea
+Controller, and a DaemonSet that includes two containers - to run Antrea Agent
+and the OVS daemons respectively - on every Node. The DaemonSet also includes
+an init container that installs the CNI plugin - `antrea-cni` - on the Node,
+ensures that the OVS kernel module is loaded, and chains `antrea-cni` with the
+portmap and bandwidth CNI plugins. The Antrea Controller, Agent, OVS daemons,
+and `antrea-cni` binaries are all included in a single Docker image. Antrea
+also has a command-line tool called `antctl`.
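+
+Once deployed, these components show up as regular workloads. As a quick
+check (assuming the default deployment manifest, which labels all Antrea
+components with `app: antrea`):
+
+```bash
+# List the antrea-controller Pod and the antrea-agent Pods (one per Node).
+kubectl get pods -n kube-system -l app=antrea -o wide
+```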
+
+{{< img src="../assets/arch.svg" width="600" alt="Antrea Architecture Overview" >}}
+
+### Antrea Controller
+
+Antrea Controller watches NetworkPolicy, Pod, and Namespace resources from the
+Kubernetes API, computes NetworkPolicies and distributes the computed policies
+to all Antrea Agents. Right now Antrea Controller supports only a single
+replica. At the moment, Antrea Controller mainly exists for NetworkPolicy
+implementation. If you only care about connectivity between Pods but not
+NetworkPolicy support, you may choose not to deploy Antrea Controller at all.
+However, in the future, Antrea might support more features that require Antrea
+Controller.
+
+Antrea Controller leverages the [Kubernetes apiserver library](https://github.com/kubernetes/apiserver)
+to implement the communication channel to Antrea Agents. Each Antrea Agent
+connects to the Controller API server and watches the computed NetworkPolicy
+objects. Controller also exposes a REST API for `antctl` on the same HTTP
+endpoint. See more information about the Controller API server implementation
+in the [Controller API server section](#controller-api-server).
+
+#### Controller API server
+
+Antrea Controller leverages the Kubernetes apiserver library to implement its
+own API server. The API server implementation is customized and optimized for
+publishing the computed NetworkPolicies to Agents:
+
+- The API server keeps all the state in in-memory caches and does not require a
+datastore to persist the data.
+- It sends the NetworkPolicy objects to only those Nodes that need to apply the
+NetworkPolicies locally. A Node receives a NetworkPolicy if and only if the
+NetworkPolicy is applied to at least one Pod on the Node.
+- It supports sending incremental updates to the NetworkPolicy objects to
+Agents.
+- Messages between Controller and Agent are serialized using the Protobuf format
+for reduced size and higher efficiency.
+
+The Antrea Controller API server also leverages Kubernetes Service for:
+
+- Service discovery
+- Authentication and authorization
+
+The Controller API endpoint is exposed through a Kubernetes ClusterIP type
+Service. Antrea Agent gets the Service's ClusterIP from the Service environment
+variable and connects to the Controller API server using the ClusterIP. The
+Controller API server delegates authentication and authorization to the
+Kubernetes API - the Antrea Agent uses a Kubernetes ServiceAccount token to
+authenticate to the Controller, and the Controller API server validates the
+token and whether the ServiceAccount is authorized for the API request with the
+Kubernetes API.
+
+Antrea Controller also exposes a REST API for `antctl` using the API server HTTP
+endpoint. It leverages [Kubernetes API aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
+to enable `antctl` to reach the Antrea Controller API through the Kubernetes
+API - `antctl` connects and authenticates to the Kubernetes API, which will
+proxy the `antctl` API requests to the Antrea Controller. In this way, `antctl`
+can be executed on any machine that can reach the Kubernetes API, and it can
+also leverage the `kubectl` configuration (`kubeconfig` file) to discover the
+Kubernetes API and authentication information. See also the [antctl section](#antctl).
+
+### Antrea Agent
+
+Antrea Agent manages the OVS bridge and Pod interfaces and implements Pod
+networking with OVS on every Kubernetes Node.
+
+Antrea Agent exposes a gRPC service (`Cni` service) which is invoked by the
+`antrea-cni` binary to perform CNI operations. For each new Pod to be created on
+the Node, after getting the CNI `ADD` call from `antrea-cni`, the Agent creates
+the Pod's network interface, allocates an IP address, connects the interface to
+the OVS bridge and installs the necessary flows in OVS. To learn more about the
+OVS flows check out the [OVS pipeline doc](ovs-pipeline.md).
+
+Antrea Agent includes two Kubernetes controllers:
+
+- The Node controller watches the Kubernetes API server for new Nodes, and
+creates an OVS (Geneve / VXLAN / GRE / STT) tunnel to each remote Node.
+- The NetworkPolicy controller watches the computed NetworkPolicies from the
+Antrea Controller API, and installs OVS flows to implement the NetworkPolicies
+for the local Pods.
+
+Antrea Agent also exposes a REST API on a local HTTP endpoint for `antctl`.
+
+### OVS daemons
+
+The two OVS daemons - `ovsdb-server` and `ovs-vswitchd` - run in a separate
+container, called `antrea-ovs`, of the Antrea Agent DaemonSet.
+
+### antrea-cni
+
+`antrea-cni` is the [CNI](https://github.com/containernetworking/cni) plugin
+binary of Antrea. It is executed by `kubelet` for each CNI command. It is a
+simple gRPC client which issues an RPC to Antrea Agent for each CNI command. The
+Agent performs the actual work (sets up networking for the Pod) and returns the
+result or an error to `antrea-cni`.
+
+### antctl
+
+`antctl` is a command-line tool for Antrea. At the moment, it can show basic
+runtime information for both Antrea Controller and Antrea Agent, for debugging
+purposes.
+
+When accessing the Controller, `antctl` invokes the Controller API to query the
+required information. As described earlier, `antctl` can reach the Controller
+API through the Kubernetes API, and have the Kubernetes API authenticate,
+authorize, and proxy the API requests to the Controller. `antctl` can be
+executed through `kubectl` as a `kubectl` plugin as well.
+
+When accessing the Agent, `antctl` connects to the Agent's local REST endpoint,
+and can only be executed locally in the Agent's container.
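+
+As a short sketch of both access paths (`<antrea-agent-pod-name>` is a
+placeholder for an actual Pod name):
+
+```bash
+# Query the Controller through the Kubernetes API, using the local kubeconfig.
+antctl version
+
+# Query the local Agent from within the antrea-agent container.
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-agent -- antctl version
+```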
+
+### Antrea web UI
+
+Antrea also comes with a web UI, which can show the Controller and Agent's
+health and basic runtime information. The UI gets the Controller and Agent's
+information from the `AntreaControllerInfo` and `AntreaAgentInfo` CRDs (Custom
+Resource Definition) in the Kubernetes API. The CRDs are created by the Antrea
+Controller and each Antrea Agent to populate their health and runtime
+information.
+
+The Antrea web UI provides additional capabilities. Please refer to the [Antrea
+UI repository](https://github.com/antrea-io/antrea-ui) for more information.
+
+## Pod Networking
+
+### Pod interface configuration and IPAM
+
+On every Node, Antrea Agent creates an OVS bridge (named `br-int` by default),
+and creates a veth pair for each Pod, with one end being in the Pod's network
+namespace and the other connected to the OVS bridge. On the OVS bridge, Antrea
+Agent also creates an internal port - `antrea-gw0` by default - to be the gateway of
+the Node's subnet, and a tunnel port `antrea-tun0` which is for creating overlay
+tunnels to other Nodes.
+
+{{< img src="../assets/node.svg.png" width="300" alt="Antrea Node Network" >}}
+
+By default, Antrea leverages Kubernetes' `NodeIPAMController` to allocate a
+single subnet for each Kubernetes Node, and Antrea Agent on a Node allocates an
+IP for each Pod on the Node from the Node's subnet. `NodeIPAMController` sets
+the `podCIDR` field of the Kubernetes Node spec to the allocated subnet. Antrea
+Agent retrieves the subnets of Nodes from the `podCIDR` field. It reserves the
+first IP of the local Node's subnet to be the gateway IP and assigns it to the
+`antrea-gw0` port, and invokes the [host-local IPAM plugin](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local)
+to allocate IPs from the subnet to all Pods. A local Pod is assigned an IP
+when the CNI ADD command is received for that Pod.
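+
+For example, you can check the subnet allocated to each Node by
+`NodeIPAMController` with a standard `kubectl` query:
+
+```bash
+# Show the per-Node Pod subnet recorded in the Node spec.
+kubectl get nodes -o custom-columns=NAME:.metadata.name,POD-CIDR:.spec.podCIDR
+```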
+
+`NodeIPAMController` can run in `kube-controller-manager` context, or within
+the context of Antrea Controller.
+
+For every remote Node, Antrea Agent adds an OVS flow to send the traffic to that
+Node through the appropriate tunnel. The flow matches the packets' destination
+IP against each Node's subnet.
+
+In addition to Kubernetes NodeIPAM, Antrea also implements its own IPAM feature,
+which can allocate IPs for Pods from user-defined IP pools. For more
+information, please refer to the [Antrea IPAM documentation](../antrea-ipam.md).
+
+### Traffic walk
+
+{{< img src="../assets/traffic_walk.svg.png" width="600" alt="Antrea Traffic Walk" >}}
+
+* ***Intra-node traffic*** Packets between two local Pods will be forwarded by
+the OVS bridge directly.
+
+* ***Inter-node traffic*** Packets to a Pod on another Node will be first
+forwarded to the `antrea-tun0` port, encapsulated, and sent to the destination Node
+through the tunnel; then they will be decapsulated, injected through the `antrea-tun0`
+port to the OVS bridge, and finally forwarded to the destination Pod.
+
+* ***Pod to external traffic*** Packets sent to an external IP or the Nodes'
+network will be forwarded to the `antrea-gw0` port (as it is the gateway of the local
+Pod subnet), and will be routed (based on routes configured on the Node) to the
+appropriate network interface of the Node (e.g. a physical network interface for
+a baremetal Node) and sent out to the Node network from there. Antrea Agent
+creates an iptables (MASQUERADE) rule to perform SNAT on the packets from Pods,
+so their source IP will be rewritten to the Node's IP before going out.
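+
+A quick way to confirm this on a Node is to list the NAT rules and look for
+the MASQUERADE target (run as root on the Node; the exact chains holding the
+rule are managed by Antrea and may vary across versions):
+
+```bash
+# List all NAT rules installed on the Node and filter for SNAT rules.
+iptables -t nat -S | grep MASQUERADE
+```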
+
+### ClusterIP Service
+
+Antrea supports two ways to implement Services of type ClusterIP - leveraging
+`kube-proxy`, or using Antrea Proxy, which implements load balancing for
+ClusterIP Service traffic with OVS.
+
+When leveraging `kube-proxy`, Antrea Agent adds OVS flows to forward the packets
+from a Pod to a Service's ClusterIP to the `antrea-gw0` port, then `kube-proxy`
+will intercept the packets and select one Service endpoint to be the
+connection's destination and DNAT the packets to the endpoint's IP and port. If
+the destination endpoint is a local Pod, the packets will be forwarded to the
+Pod directly; if it is on another Node the packets will be sent to that Node via
+the tunnel.
+
+{{< img src="../assets/service_walk.svg.png" width="600" alt="Antrea Service Traffic Walk" >}}
+
+`kube-proxy` can be used in any supported mode: iptables, IPVS or nftables.
+See the [Kubernetes Service Proxies documentation](https://kubernetes.io/docs/reference/networking/virtual-ips)
+for more details.
+
+When Antrea Proxy is enabled, Antrea Agent will add OVS flows that implement
+load balancing and DNAT for the ClusterIP Service traffic. In this way, Service
+traffic load balancing is done inside OVS together with the rest of the
+forwarding, and it can achieve better performance than using `kube-proxy`, as
+there is no extra overhead of forwarding Service traffic to the host's network
+stack and iptables processing. The Antrea Proxy implementation in Antrea Agent
+leverages some `kube-proxy` packages to watch and process Service Endpoints.
+
+### NetworkPolicy
+
+An important design choice Antrea took regarding the NetworkPolicy
+implementation is centralized policy computation. Antrea Controller watches
+NetworkPolicy, Pod, and Namespace resources from the Kubernetes API. It
+processes podSelectors, namespaceSelectors, and ipBlocks as follows:
+
+- PodSelectors directly under the NetworkPolicy spec (which define the Pods to
+which the NetworkPolicy is applied) will be translated to member Pods.
+- Selectors (podSelectors and namespaceSelectors) and ipBlocks in rules (which
+define the ingress and egress traffic allowed by this policy) will be mapped to
+Pod IP addresses / IP address ranges.
+
+Antrea Controller also computes which Nodes need to receive a NetworkPolicy.
+Each Antrea Agent receives only the computed policies which affect Pods running
+locally on its Node, and directly uses the IP addresses computed by the
+Controller to create OVS flows enforcing the specified NetworkPolicies.
+
+We see the following major benefits of the centralized computation approach:
+
+* Only one Antrea Controller instance needs to receive and process all
+NetworkPolicy, Pod, and Namespace updates, and compute podSelectors and
+namespaceSelectors. This has a much lower overall cost compared to watching
+these updates and performing the same complex policy computation on all Nodes.
+
+* It could enable scale-out of Controllers, with multiple Controllers working
+together on the NetworkPolicy computation, each one being responsible for a
+subset of NetworkPolicies (though at the moment Antrea supports only a single
+Controller instance).
+
+* Antrea Controller is the single source of NetworkPolicy computation. It is
+much easier to achieve consistency among Nodes and easier to debug the
+NetworkPolicy implementation.
+
+As described earlier, Antrea Controller leverages the Kubernetes apiserver
+library to build the API and communication channel to Agents.
+
+### Hybrid, NoEncap, NetworkPolicyOnly TrafficEncapMode
+
+Besides the default `Encap` mode, which always creates overlay tunnels among
+Nodes and encapsulates inter-Node Pod traffic, Antrea also supports other
+TrafficEncapModes including `Hybrid`, `NoEncap`, `NetworkPolicyOnly` modes. This
+section introduces these modes.
+
+* ***Hybrid*** When two Nodes are in two different subnets, Pod traffic between
+the two Nodes is encapsulated; when the two Nodes are in the same subnet, Pod
+traffic between them is not encapsulated, instead the traffic is routed from one
+Node to another. Antrea Agent adds routes on the Node to enable the routing
+within the same Node subnet. For every remote Node in the same subnet as the
+local Node, Agent adds a static route entry that uses the remote Node IP as the
+next hop of its Pod subnet.
+
+`Hybrid` mode requires the Node network to allow packets with Pod IPs to be sent
+out from the Nodes' NICs.
+
+* ***NoEncap*** Pod traffic is never encapsulated. Antrea just assumes the Node
+network can handle routing of Pod traffic across Nodes. Typically this is
+achieved by the Kubernetes Cloud Provider implementation which adds routes for
+Pod subnets to the Node network routers. Antrea Agent still creates static
+routes on each Node for remote Nodes in the same subnet, which is an optimization
+that routes Pod traffic directly to the destination Node without going through
+the extra hop of the Node network router. Antrea Agent also creates the iptables
+(MASQUERADE) rule for SNAT of Pod-to-external traffic.
+
+[Antrea supports GKE](../gke-installation.md) with `NoEncap` mode.
+
+* ***NetworkPolicyOnly*** Inter-Node Pod traffic is neither tunneled nor routed
+by Antrea. Antrea just implements NetworkPolicies for Pod traffic, but relies on
+another cloud CNI and cloud network to implement Pod IPAM and cross-Node traffic
+forwarding. Refer to the [NetworkPolicyOnly mode design document](policy-only.md)
+for more information.
+
+[Antrea for AKS
+Engine](https://github.com/Azure/aks-engine/blob/master/docs/topics/features.md#feat-antrea)
+and [Antrea EKS support](../eks-installation.md) work in `NetworkPolicyOnly`
+mode.
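+
+The mode is selected with the `trafficEncapMode` setting of the antrea-agent
+configuration. As a minimal sketch, assuming the default ConfigMap name
+`antrea-config`:
+
+```bash
+# Edit the antrea-agent.conf section to set, e.g., "trafficEncapMode: hybrid",
+# then restart the antrea-agent Pods for the change to take effect.
+kubectl -n kube-system edit configmap antrea-config
+```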
+
+## Features
+
+### Antrea Network Policy
+
+Besides Kubernetes NetworkPolicy, Antrea supports two extra types of
+Network Policies available as CRDs - Antrea Namespaced NetworkPolicy and
+ClusterNetworkPolicy. The former is scoped to a specific Namespace, while the
+latter is scoped to the whole cluster. These two types of Network Policies
+extend Kubernetes NetworkPolicy with advanced features including: policy
+priority, tiering, deny action, external entity, and policy statistics. For more
+information about Antrea network policies, refer to the [Antrea Network Policy document](../antrea-network-policy.md).
+
+Just like for Kubernetes NetworkPolicies, Antrea Controller transforms Antrea
+NetworkPolicies and ClusterNetworkPolicies to internal NetworkPolicy,
+AddressGroup and AppliedToGroup objects, and disseminates them to Antrea
+Agents. Antrea Agents create OVS flows to enforce the NetworkPolicies applied
+to the local Pods on their Nodes.
+
+### IPsec encryption
+
+Antrea supports encrypting Pod traffic across Linux Nodes with IPsec ESP. The
+IPsec implementation leverages [OVS
+IPsec](https://docs.openvswitch.org/en/latest/tutorials/ipsec/) and leverages
+[strongSwan](https://www.strongswan.org) as the IKE daemon. By default GRE
+tunnels are used but other tunnel types are also supported.
+
+To enable IPsec, an extra container - `antrea-ipsec` - must be added to the
+Antrea Agent DaemonSet, which runs the `ovs-monitor-ipsec` and strongSwan
+daemons. Antrea now supports only using pre-shared key (PSK) for IKE
+authentication, and the PSK string must be passed to Antrea Agent using an
+environment variable - `ANTREA_IPSEC_PSK`. The PSK string can be specified in
+the [Antrea IPsec deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea-ipsec.yml), which creates
+a Kubernetes Secret to save the PSK value and populates it to the
+`ANTREA_IPSEC_PSK` environment variable of the Antrea Agent container.
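+
+As a minimal sketch, assuming the deployment yaml has been downloaded locally
+and the PSK value in the embedded Secret has been changed first:
+
+```bash
+# Deploy the IPsec variant of Antrea, which adds the antrea-ipsec container.
+kubectl apply -f antrea-ipsec.yml
+```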
+
+When IPsec is enabled, Antrea Agent will create a separate tunnel port on
+the OVS bridge for each remote Node, and write the PSK string and the remote
+Node IP address to two OVS interface options of the tunnel interface. Then
+`ovs-monitor-ipsec` can detect the tunnel and create IPsec Security Policies
+with PSK for the remote Node, and strongSwan can create the IPsec Security
+Associations based on the Security Policies. These additional tunnel ports are
+not used to send traffic to a remote Node - the tunnel traffic is still output
+to the default tunnel port (`antrea-tun0`) with OVS flow based tunneling.
+However, the traffic from a remote Node will be received from the Node's IPsec
+tunnel port.
+
+### Network flow visibility
+
+Antrea supports exporting network flow information with Kubernetes context
+using IPFIX. The exported network flows can be visualized using Elastic Stack
+and Kibana dashboards. For more information, refer to the [network flow
+visibility document](../network-flow-visibility.md).
+
+### Prometheus integration
+
+Antrea supports exporting metrics to Prometheus. Both Antrea Controller and
+Antrea Agent implement the `/metrics` API endpoint on their API server to expose
+various metrics generated by Antrea components or 3rd party components used by
+Antrea. Prometheus can be configured to collect metrics from the API endpoints.
+For more information, please refer to the [Prometheus integration document](../prometheus-integration.md).
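+
+As a rough sketch of a manual check (this assumes the default antrea-agent API
+port of 10350, a `kubectl` version recent enough to support `create token`,
+and that the token is authorized to read metrics):
+
+```bash
+# Request a short-lived token for the antrea-agent ServiceAccount, then fetch
+# the metrics endpoint exposed on a Node.
+TOKEN=$(kubectl -n kube-system create token antrea-agent)
+curl -sk -H "Authorization: Bearer $TOKEN" https://<node-ip>:10350/metrics | head
+```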
+
+### Windows Node
+
+On a Windows Node, Antrea acts very much like it does on a Linux Node. Antrea
+Agent and OVS are still run on the Node, Windows Pods are still connected to the
+OVS bridge, and Pod networking is still mostly implemented with OVS flows. Even
+the OVS flows are mostly the same as those on a Linux Node. The main differences
+in the Antrea implementation for Windows Nodes are: how Antrea Agent and OVS
+daemons are run and managed, how the OVS bridge is configured and Pod network
+interfaces are connected to the bridge, and how host network routing and SNAT
+are implemented. For more information about the Antrea Windows implementation,
+refer to the [Windows design document](windows-design.md).
+
+### Antrea Multi-cluster
+
+Antrea Multi-cluster implements Multi-cluster Service API, which allows users to
+create multi-cluster Services that can be accessed across clusters in a
+ClusterSet. Antrea Multi-cluster also supports Antrea ClusterNetworkPolicy
+replication. Multi-cluster admins can define ClusterNetworkPolicies to be
+replicated across the entire ClusterSet, and enforced in all member clusters.
+To learn more about the Antrea Multi-cluster architecture, please
+refer to the [Antrea Multi-cluster architecture document](../multicluster/architecture.md).
diff --git a/content/docs/v2.2.0-alpha.2/docs/design/ovs-pipeline.md b/content/docs/v2.2.0-alpha.2/docs/design/ovs-pipeline.md
new file mode 100644
index 00000000..5f3aecd2
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/design/ovs-pipeline.md
@@ -0,0 +1,1900 @@
+# Antrea OVS Pipeline
+
+## Introduction
+
+This document outlines the Open vSwitch (OVS) pipeline Antrea uses to implement its networking functionalities. The
+following assumptions are currently in place:
+
+- Antrea is deployed in encap mode, establishing an overlay network across all Nodes.
+- All the Nodes are Linux Nodes.
+- IPv6 is disabled.
+- Option `antreaProxy.proxyAll` (referred to as `proxyAll` later in this document) is enabled.
+- Two Alpha features `TrafficControl` and `L7NetworkPolicy` are enabled.
+- Default settings are maintained for other features and options.
+
+The document references version v1.15 of Antrea.
+
+## Terminology
+
+### Antrea / Kubernetes
+
+- *Node Route Controller*: the [Kubernetes controller](https://kubernetes.io/docs/concepts/architecture/controller/)
+  which is part of antrea-agent and watches for updates to Nodes. When a Node is added, it updates the local
+  networking configuration (e.g. configuring the tunnel to the new Node). When a Node is deleted, it performs the
+  necessary clean-ups.
+- *peer Node*: this is how we refer to other Nodes in the cluster, to which the local Node is connected through a Geneve,
+ VXLAN, GRE, or STT tunnel.
+- *Antrea-native NetworkPolicy*: Antrea ClusterNetworkPolicy and Antrea NetworkPolicy CRDs, as documented
+ [here](../antrea-network-policy.md).
+- *Service session affinity*: a Service attribute that selects the same backend Pods for connections from a particular
+ client. For a K8s Service, session affinity can be enabled by setting `service.spec.sessionAffinity` to `ClientIP`
+ (default is `None`). See [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/) for
+ more information about session affinity.
+
+### OpenFlow
+
+- *table-miss flow*: a "catch-all" flow in an OpenFlow table, which is used if no other flow is matched. If the table-miss
+ flow does not exist, by default packets unmatched by flows are dropped (discarded).
+- *action `conjunction`*: an efficient way in OVS to implement conjunctive matches, that is, matches for which multiple
+  fields are required to match conjunctively, each within a set of acceptable values. See [OVS
+  fields](http://www.openvswitch.org/support/dist-docs/ovs-fields.7.txt) for more information.
+- *action `normal`*: OpenFlow defines this action to submit a packet to "the traditional non-OpenFlow pipeline of
+ the switch". In other words, if a flow uses this action, the packets matched by the flow traverse the switch in
+ the same manner as they would if OpenFlow were not configured on the switch. Antrea uses this action to process
+ ARP packets as a regular learning L2 switch would.
+- *action `group`*: an action used to process forwarding decisions on multiple OVS ports. Examples include:
+ load-balancing, multicast, and active/standby. See [OVS group
+ action](https://docs.openvswitch.org/en/latest/ref/ovs-actions.7/#the-group-action) for more information.
+- *action `IN_PORT`*: an action to output packets to the port on which they were received. This is the only standard way
+ to output the packets to the input port.
+- *action `ct`*: an action to commit connections to the connection tracking module, which OVS can use to match
+ the state of a TCP, UDP, ICMP, etc., connection. See the [OVS Conntrack
+ tutorial](https://docs.openvswitch.org/en/latest/tutorials/ovs-conntrack/) for more information.
+- *reg mark*: a value stored in an OVS register conveying information for a packet across the pipeline. Explore all reg
+ marks in the pipeline in the [OVS Registers] section.
+- *ct mark*: a value stored in the field `ct_mark` of OVS conntrack, conveying information for a connection throughout
+ its entire lifecycle across the pipeline. Explore all values used in the pipeline in the [Ct Marks] section.
+- *ct label*: a value stored in the field `ct_label` of OVS conntrack, conveying information for a connection throughout
+ its entire lifecycle across the pipeline. Explore all values used in the pipeline in the [Ct Labels] section.
+- *ct zone*: a zone isolates connection tracking rules; the zone ID is stored in the field `ct_zone` of OVS conntrack.
+  It is conceptually similar to the more generic Linux network namespace but is specific to conntrack and has less
+  overhead. Explore all the zones used in the pipeline in the [Ct Zones] section.
+
+### Misc
+
+- *dmac table*: a traditional L2 switch has a "dmac" table that maps the learned destination MAC address to the appropriate
+ egress port. It is often the same physical table as the "smac" table (which matches the source MAC address and
+ initiates MAC learning if the address is unknown).
+- *Global Virtual MAC*: a virtual MAC address that is used as the destination MAC for all tunneled traffic across all
+ Nodes. This simplifies networking by enabling all Nodes to use this MAC address instead of the actual MAC address of
+ the appropriate remote gateway. This allows each OVS to act as a "proxy" for the local gateway when receiving
+ tunneled traffic and directly take care of the packet forwarding. Currently, we use a hard-coded value of
+ `aa:bb:cc:dd:ee:ff`.
+- *Virtual Service IP*: a virtual IP address used as the source IP address for hairpin Service connections through the
+ Antrea gateway port. Currently, we use a hard-coded value of `169.254.0.253`.
+- *Virtual NodePort DNAT IP*: a virtual IP address used as a DNAT IP address for NodePort Service connections through
+  the Antrea gateway port. Currently, we use a hard-coded value of `169.254.0.252`.
+
+## Dumping the Flows / Groups
+
+This guide includes a representative flow dump for every table in the pipeline, to illustrate the function of each
+table. If you have a cluster running Antrea, you can dump the flows or groups on a given Node as follows:
+
+```bash
+# Dump all flows.
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-ofctl dump-flows <bridge-name> -O Openflow15 [--no-stats] [--names]
+
+# Dump all groups.
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-ofctl dump-groups <bridge-name> -O Openflow15 [--names]
+```
+
+where `<antrea-agent-pod-name>` is the name of the antrea-agent Pod running on that Node, and `<bridge-name>` is the name
+of the bridge created by Antrea (`br-int` by default).
+
+You can also dump the flows for a specific table or group as follows:
+
+```bash
+# Dump flows of a table.
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-ofctl dump-flows <bridge-name> table=<table-name> -O Openflow15 [--no-stats] [--names]
+
+# Dump a group.
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-ofctl dump-groups <bridge-name> <group-id> -O Openflow15 [--names]
+```
+
+where `<table-name>` is the name of a table in the pipeline, and `<group-id>` is the ID of a group.
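+
+For example, to dump the flows of table [Classifier] on the default bridge (the Pod name below is illustrative):
+
+```bash
+kubectl exec -n kube-system antrea-agent-x7k2b -c antrea-ovs -- ovs-ofctl dump-flows br-int table=Classifier -O Openflow15 --names
+```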
+
+## OVS Registers
+
+We use some OVS registers to carry information throughout the pipeline. To enhance usability, we assign friendly names
+to the registers we use.
+
+| Register | Field Range | Field Name | Reg Mark Value | Reg Mark Name | Description |
+|---------------|-------------|---------------------------------|----------------|---------------------------------|------------------------------------------------------------------------------------------------------|
+| NXM_NX_REG0 | bits 0-3 | PktSourceField | 0x1 | FromTunnelRegMark | Packet source is tunnel port. |
+| | | | 0x2 | FromGatewayRegMark | Packet source is the local Antrea gateway port. |
+| | | | 0x3 | FromPodRegMark | Packet source is local Pod port. |
+| | | | 0x4 | FromUplinkRegMark | Packet source is uplink port. |
+| | | | 0x5 | FromBridgeRegMark | Packet source is local bridge port. |
+| | | | 0x6 | FromTCReturnRegMark | Packet source is TrafficControl return port. |
+| | bits 4-7 | PktDestinationField | 0x1 | ToTunnelRegMark | Packet destination is tunnel port. |
+| | | | 0x2 | ToGatewayRegMark | Packet destination is the local Antrea gateway port. |
+| | | | 0x3 | ToLocalRegMark | Packet destination is local Pod port. |
+| | | | 0x4 | ToUplinkRegMark | Packet destination is uplink port. |
+| | | | 0x5 | ToBridgeRegMark | Packet destination is local bridge port. |
+| | bit 9 | | 0b0 | NotRewriteMACRegMark | Packet's source/destination MAC address does not need to be rewritten. |
+| | | | 0b1 | RewriteMACRegMark | Packet's source/destination MAC address needs to be rewritten. |
+| | bit 10 | | 0b1 | APDenyRegMark | Packet denied (Drop/Reject) by Antrea NetworkPolicy. |
+| | bits 11-12 | APDispositionField | 0b00 | DispositionAllowRegMark | Indicates Antrea NetworkPolicy disposition: allow. |
+| | | | 0b01 | DispositionDropRegMark | Indicates Antrea NetworkPolicy disposition: drop. |
+| | | | 0b11 | DispositionPassRegMark | Indicates Antrea NetworkPolicy disposition: pass. |
+| | bit 13 | | 0b1 | GeneratedRejectPacketOutRegMark | Indicates packet is a generated reject response packet-out. |
+| | bit 14 | | 0b1 | SvcNoEpRegMark | Indicates packet towards a Service without Endpoint. |
+| | bit 19 | | 0b1 | RemoteSNATRegMark | Indicates packet needs SNAT on a remote Node. |
+| | bit 22 | | 0b1 | L7NPRedirectRegMark | Indicates L7 Antrea NetworkPolicy disposition of redirect. |
+| | bits 21-22 | OutputRegField | 0b01 | OutputToOFPortRegMark | Output packet to an OVS port. |
+| | | | 0b10 | OutputToControllerRegMark | Send packet to Antrea Agent. |
+| | bits 25-32 | PacketInOperationField | | | Field to store NetworkPolicy packetIn operation. |
+| NXM_NX_REG1 | bits 0-31 | TargetOFPortField | | | Egress OVS port of packet. |
+| NXM_NX_REG2 | bits 0-31 | SwapField | | | Swap values in flow fields in OpenFlow actions. |
+| | bits 0-7 | PacketInTableField | | | OVS table where it was decided to send packets to the controller (Antrea Agent). |
+| NXM_NX_REG3 | bits 0-31 | EndpointIPField | | | Field to store IPv4 address of the selected Service Endpoint. |
+| | | APConjIDField | | | Field to store Conjunction ID for Antrea Policy. |
+| NXM_NX_REG4   | bits 0-15   | EndpointPortField               |                |                                 | Field to store the TCP/UDP/SCTP port of a Service's selected Endpoint.                                |
+| | bits 16-18 | ServiceEPStateField | 0b001 | EpToSelectRegMark | Packet needs to do Service Endpoint selection. |
+| | bits 16-18 | ServiceEPStateField | 0b010 | EpSelectedRegMark | Packet has done Service Endpoint selection. |
+| | bits 16-18 | ServiceEPStateField | 0b011 | EpToLearnRegMark | Packet has done Service Endpoint selection and the selected Endpoint needs to be cached. |
+| | bits 0-18 | EpUnionField | | | The union value of EndpointPortField and ServiceEPStateField. |
+| | bit 19 | | 0b1 | ToNodePortAddressRegMark | Packet is destined for a Service of type NodePort. |
+| | bit 20 | | 0b1 | AntreaFlexibleIPAMRegMark | Packet is from local Antrea IPAM Pod. |
+| | bit 20 | | 0b0 | NotAntreaFlexibleIPAMRegMark | Packet is not from local Antrea IPAM Pod. |
+| | bit 21 | | 0b1 | ToExternalAddressRegMark | Packet is destined for a Service's external IP. |
+| | bits 22-23 | TrafficControlActionField | 0b01 | TrafficControlMirrorRegMark | Indicates packet needs to be mirrored (used by TrafficControl). |
+| | | | 0b10 | TrafficControlRedirectRegMark | Indicates packet needs to be redirected (used by TrafficControl). |
+| | bit 24 | | 0b1 | NestedServiceRegMark | Packet is destined for a Service using other Services as Endpoints. |
+| | bit 25 | | 0b1 | DSRServiceRegMark | Packet is destined for a Service working in DSR mode. |
+| | | | 0b0 | NotDSRServiceRegMark | Packet is destined for a Service working in non-DSR mode. |
+| | bit 26 | | 0b1 | RemoteEndpointRegMark | Packet is destined for a Service selecting a remote non-hostNetwork Endpoint. |
+| | bit 27 | | 0b1 | FromExternalRegMark | Packet is from Antrea gateway, but its source IP is not the gateway IP. |
+| | bit 28 | | 0b1 | FromLocalRegMark | Packet is from a local Pod or the Node. |
+| NXM_NX_REG5 | bits 0-31 | TFEgressConjIDField | | | Egress conjunction ID hit by TraceFlow packet. |
+| NXM_NX_REG6 | bits 0-31 | TFIngressConjIDField | | | Ingress conjunction ID hit by TraceFlow packet. |
+| NXM_NX_REG7 | bits 0-31 | ServiceGroupIDField | | | GroupID corresponding to the Service. |
+| NXM_NX_REG8 | bits 0-11 | VLANIDField | | | VLAN ID. |
+| | bits 12-15 | CtZoneTypeField | 0b0001 | IPCtZoneTypeRegMark | Ct zone type is IPv4. |
+| | | | 0b0011 | IPv6CtZoneTypeRegMark | Ct zone type is IPv6. |
+| | bits 0-15 | CtZoneField | | | Ct zone ID is a combination of VLANIDField and CtZoneTypeField. |
+| NXM_NX_REG9 | bits 0-31 | TrafficControlTargetOFPortField | | | Field to cache the OVS port to output packets to be mirrored or redirected (used by TrafficControl). |
+| NXM_NX_XXREG3 | bits 0-127 | EndpointIP6Field | | | Field to store IPv6 address of the selected Service Endpoint. |
+
+Note that reg marks with overlapping bits are never used at the same time - for example, `SwapField` and `PacketInTableField`.
+
+## OVS Ct Mark
+
+We use some bits of the `ct_mark` field of OVS conntrack to carry information throughout the pipeline. To enhance
+usability, we assign friendly names to the bits we use.
+
+| Field Range | Field Name | Ct Mark Value | Ct Mark Name | Description |
+|-------------|-----------------------|---------------|--------------------|-----------------------------------------------------------------|
+| bits 0-3 | ConnSourceCTMarkField | 0b0010 | FromGatewayCTMark | Connection source is the Antrea gateway port. |
+| | | 0b0101 | FromBridgeCTMark | Connection source is the local bridge port. |
+| bit 4 | | 0b1 | ServiceCTMark | Connection is for Service. |
+| | | 0b0 | NotServiceCTMark | Connection is not for Service. |
+| bit 5 | | 0b1 | ConnSNATCTMark | SNAT'd connection for Service. |
+| bit 6 | | 0b1 | HairpinCTMark | Hair-pin connection. |
+| bit 7 | | 0b1 | L7NPRedirectCTMark | Connection should be redirected to an application-aware engine. |
+
+## OVS Ct Label
+
+We use some bits of the `ct_label` field of OVS conntrack to carry information throughout the pipeline. To enhance
+usability, we assign friendly names to the bits we use.
+
+| Field Range | Field Name | Description |
+|-------------|-----------------------|------------------------------------|
+| bits 0-31 | IngressRuleCTLabel | Ingress rule ID. |
+| bits 32-63 | EgressRuleCTLabel | Egress rule ID. |
+| bits 64-75 | L7NPRuleVlanIDCTLabel | VLAN ID for L7 NetworkPolicy rule. |
+
+## OVS Ct Zone
+
+We use some OVS conntrack zones to isolate connection tracking rules. To enhance usability, we assign friendly names to
+the ct zones.
+
+| Zone ID | Zone Name | Description |
+|---------|--------------|----------------------------------------------------|
+| 65520 | CtZone | Tracking IPv4 connections that don't require SNAT. |
+| 65521 | SNATCtZone | Tracking IPv4 connections that require SNAT. |
+
+## Kubernetes NetworkPolicy Implementation
+
+Several tables of the pipeline are dedicated to [Kubernetes
+NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) implementation (tables
+[EgressRule], [EgressDefaultRule], [IngressRule], and [IngressDefaultRule]).
+
+Throughout this document, the following K8s NetworkPolicy example is used to demonstrate how simple ingress and egress
+policy rules are mapped to OVS flows.
+
+This K8s NetworkPolicy is applied to Pods with the label `app: web` in the `default` Namespace. For these Pods, only TCP
+traffic on port 80 from Pods with the label `app: client` and to Pods with the label `app: db` is allowed. Because
+Antrea will only install OVS flows for this K8s NetworkPolicy on Nodes that have Pods selected by the policy, we have
+scheduled an `app: web` Pod on the current Node from which the sample flows in this document are dumped. The Pod has
+been assigned an IP address `10.10.0.19` from the Antrea CNI, so you will see the IP address shown in the associated
+flows.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: web-app-db-network-policy
+ namespace: default
+spec:
+ podSelector:
+ matchLabels:
+ app: web
+ policyTypes:
+ - Ingress
+ - Egress
+ ingress:
+ - from:
+ - podSelector:
+ matchLabels:
+ app: client
+ ports:
+ - protocol: TCP
+ port: 80
+ egress:
+ - to:
+ - podSelector:
+ matchLabels:
+ app: db
+ ports:
+ - protocol: TCP
+ port: 3306
+```
+
+## Kubernetes Service Implementation
+
+Like K8s NetworkPolicy, several tables of the pipeline are dedicated to [Kubernetes
+Service](https://kubernetes.io/docs/concepts/services-networking/service/) implementation (tables [NodePortMark],
+[SessionAffinity], [ServiceLB], and [EndpointDNAT]).
+
+By enabling `proxyAll`, ClusterIP, NodePort, LoadBalancer, and ExternalIP are all handled by Antrea Proxy. Otherwise,
+only in-cluster ClusterIP is handled. In this document, we use the sample K8s Services below. These Services select Pods
+with the label `app: web` as Endpoints.
+
+### ClusterIP without Endpoint
+
+A sample Service with `clusterIP` set to `10.101.255.29` does not have any associated Endpoint.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample-clusterip-no-ep
+spec:
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ clusterIP: 10.101.255.29
+```
+
+### ClusterIP
+
+A sample ClusterIP Service with `clusterIP` set to `10.105.31.235`.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample-clusterip
+spec:
+ selector:
+ app: web
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ clusterIP: 10.105.31.235
+```
+
+### NodePort
+
+A sample NodePort Service with `nodePort` set to `30004`.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample-nodeport
+spec:
+ selector:
+ app: web
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ nodePort: 30004
+ type: NodePort
+```
+
+### LoadBalancer
+
+A sample LoadBalancer Service with ingress IP `192.168.77.150` assigned by an ingress controller.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample-loadbalancer
+spec:
+ selector:
+ app: web
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ type: LoadBalancer
+status:
+ loadBalancer:
+ ingress:
+ - ip: 192.168.77.150
+```
+
+### Service with ExternalIP
+
+A sample Service with external IP `192.168.77.200`.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample-service-externalip
+spec:
+ selector:
+ app: web
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ externalIPs:
+ - 192.168.77.200
+```
+
+### Service with Session Affinity
+
+A sample Service configured with session affinity.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample-service-session-affinity
+spec:
+ selector:
+ app: web
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ clusterIP: 10.96.76.15
+ sessionAffinity: ClientIP
+ sessionAffinityConfig:
+ clientIP:
+ timeoutSeconds: 300
+```
+
+### Service with ExternalTrafficPolicy Local
+
+A sample Service with `externalTrafficPolicy` set to `Local`. Only NodePort and LoadBalancer Services support
+`externalTrafficPolicy: Local`.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: sample-service-etp-local
+spec:
+ selector:
+ app: web
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ type: LoadBalancer
+ externalTrafficPolicy: Local
+status:
+ loadBalancer:
+ ingress:
+ - ip: 192.168.77.151
+```
+
+## Antrea-native NetworkPolicy Implementation
+
+In addition to the tables created for K8s NetworkPolicy, Antrea creates additional dedicated tables to support
+[Antrea-native NetworkPolicy](../antrea-network-policy.md) (tables [AntreaPolicyEgressRule] and
+[AntreaPolicyIngressRule]).
+
+Consider the following Antrea ClusterNetworkPolicy (ACNP) in the Application Tier as an example for the remainder of
+this document.
+
+This ACNP is applied to all Pods with the label `app: web` in all Namespaces. For these Pods, only TCP traffic on port
+80 from the Pods with the label `app: client` and to the Pods with the label `app: db` is allowed. Similar to K8s
+NetworkPolicy, Antrea will only install OVS flows for this policy on Nodes that have Pods selected by the policy.
+
+This policy has very similar rules as the K8s NetworkPolicy example shown previously. This is intentional to simplify
+this document and to allow easier comparison between the flows generated for both types of policies. Additionally, we
+should emphasize that this policy applies to Pods across all Namespaces, while a K8s NetworkPolicy is always scoped to
+a specific Namespace (in the case of our example, the default Namespace).
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: web-app-db-network-policy
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ ports:
+ - protocol: TCP
+ port: 80
+ name: AllowFromClient
+ - action: Drop
+ egress:
+ - action: Allow
+ to:
+ - podSelector:
+ matchLabels:
+ app: db
+ ports:
+ - protocol: TCP
+ port: 3306
+ name: AllowToDB
+ - action: Drop
+```
+
+## Antrea-native L7 NetworkPolicy Implementation
+
+In addition to the layer 3 and layer 4 policies mentioned above, [Antrea-native Layer 7
+NetworkPolicy](../antrea-l7-network-policy.md) is also supported in Antrea. The main difference is that Antrea-native L7
+NetworkPolicy filters traffic based on layer 7 protocol attributes, not only on layer 3 or layer 4 ones.
+
+Consider the following Antrea-native L7 NetworkPolicy in the Application Tier as an example for the remainder of this
+document.
+
+This ACNP is applied to all Pods with the label `app: web` in all Namespaces. It allows only HTTP ingress traffic on
+port 8080 from Pods with the label `app: client`, limited to the `GET` method and `/api/v2/*` path. Any other HTTP
+ingress traffic on port 8080 from Pods with the label `app: client` will be dropped.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: ingress-allow-http-request-to-api-v2
+spec:
+ priority: 4
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: AllowFromClientL7
+ action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ ports:
+ - protocol: TCP
+ port: 8080
+ l7Protocols:
+ - http:
+ path: "/api/v2/*"
+ method: "GET"
+```
+
+## TrafficControl Implementation
+
+[TrafficControl](../traffic-control.md) is a CRD API that manages and manipulates the transmission of Pod traffic.
+Antrea creates a dedicated table [TrafficControl] to implement feature `TrafficControl`. We will use the following
+TrafficControls as examples for the remainder of this document.
+
+### TrafficControl for Packet Redirecting
+
+This is a TrafficControl applied to Pods with the label `app: web`. For these Pods, both ingress and egress traffic will
+be redirected to port `antrea-tc-tap0`, and returned through port `antrea-tc-tap1`.
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: redirect-web-to-local
+spec:
+ appliedTo:
+ podSelector:
+ matchLabels:
+ app: web
+ direction: Both
+ action: Redirect
+ targetPort:
+ ovsInternal:
+ name: antrea-tc-tap0
+ returnPort:
+ ovsInternal:
+ name: antrea-tc-tap1
+```
+
+### TrafficControl for Packet Mirroring
+
+This is a TrafficControl applied to Pods with the label `app: db`. For these Pods, both ingress and egress will be
+mirrored (duplicated) to port `antrea-tc-tap2`.
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: mirror-db-to-local
+spec:
+ appliedTo:
+ podSelector:
+ matchLabels:
+ app: db
+ direction: Both
+ action: Mirror
+ targetPort:
+ ovsInternal:
+ name: antrea-tc-tap2
+```
+
+## Egress Implementation
+
+Table [EgressMark] is dedicated to the implementation of feature `Egress`.
+
+Consider the following Egresses as examples for the remainder of this document.
+
+### Egress Applied to Web Pods
+
+This is an Egress applied to Pods with the label `app: web`. For these Pods, all egress traffic (traffic leaving the
+cluster) will be SNAT'd on the Node `k8s-node-control-plane` using Egress IP `192.168.77.112`. In this context,
+`k8s-node-control-plane` is known as the "Egress Node" for this Egress resource. Note that the flows presented in the
+rest of this document were dumped on Node `k8s-node-control-plane`. Egress flows are different on the "source Node"
+(Node running a workload Pod to which the Egress resource is applied) and on the "Egress Node" (Node enforcing the
+SNAT policy).
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-web
+spec:
+ appliedTo:
+ podSelector:
+ matchLabels:
+ app: web
+ egressIP: 192.168.77.112
+status:
+ egressNode: k8s-node-control-plane
+```
+
+### Egress Applied to Client Pods
+
+This is an Egress applied to Pods with the label `app: client`. For these Pods, all egress traffic will be SNAT'd on the
+Node `k8s-node-worker-1` using Egress IP `192.168.77.113`.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-client
+spec:
+ appliedTo:
+ podSelector:
+ matchLabels:
+ app: client
+ egressIP: 192.168.77.113
+status:
+ egressNode: k8s-node-worker-1
+```
+
+## OVS Tables
+
+![OVS pipeline](../assets/ovs-pipeline.svg)
+
+### PipelineRootClassifier
+
+This table serves as the primary entry point in the pipeline, forwarding packets to different tables based on their
+respective protocols.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=PipelineRootClassifier, priority=200,arp actions=goto_table:ARPSpoofGuard
+2. table=PipelineRootClassifier, priority=200,ip actions=goto_table:Classifier
+3. table=PipelineRootClassifier, priority=0 actions=drop
+```
+
+Flow 1 forwards ARP packets to table [ARPSpoofGuard].
+
+Flow 2 forwards IP packets to table [Classifier].
+
+Flow 3 is the table-miss flow that drops packets of other, unsupported protocols. It should not normally be hit.
+
+### ARPSpoofGuard
+
+This table is designed to drop ARP [spoofing](https://en.wikipedia.org/wiki/Spoofing_attack) packets from local Pods or
+the local Antrea gateway. We ensure that the advertised IP and MAC addresses are correct, meaning they match the values
+configured on the interface when Antrea sets up networking for a local Pod or the local Antrea gateway.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=ARPSpoofGuard, priority=200,arp,in_port="antrea-gw0",arp_spa=10.10.0.1,arp_sha=ba:5e:d1:55:aa:c0 actions=goto_table:ARPResponder
+2. table=ARPSpoofGuard, priority=200,arp,in_port="client-6-3353ef",arp_spa=10.10.0.26,arp_sha=5e:b5:e3:a6:90:b7 actions=goto_table:ARPResponder
+3. table=ARPSpoofGuard, priority=200,arp,in_port="web-7975-274540",arp_spa=10.10.0.24,arp_sha=fa:b7:53:74:21:a6 actions=goto_table:ARPResponder
+4. table=ARPSpoofGuard, priority=200,arp,in_port="db-755c6-5080e3",arp_spa=10.10.0.25,arp_sha=36:48:21:a2:9d:b4 actions=goto_table:ARPResponder
+5. table=ARPSpoofGuard, priority=0 actions=drop
+```
+
+Flow 1 matches legitimate ARP packets from the local Antrea gateway.
+
+Flows 2-4 match legitimate ARP packets from local Pods.
+
+Flow 5 is the table-miss flow to drop ARP spoofing packets, which are not matched by flows 1-4.
+
+### ARPResponder
+
+The purpose of this table is to handle ARP requests from the local Antrea gateway or local Pods, addressing specific cases:
+
+1. Responding to ARP requests from the local Antrea gateway seeking the MAC address of a remote Antrea gateway located
+ on a different Node. This ensures that the local Node can reach any remote Pods.
+2. Ensuring the normal layer 2 (L2) learning among local Pods and the local Antrea gateway.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=ARPResponder, priority=200,arp,arp_tpa=10.10.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:aa:bb:cc:dd:ee:ff->eth_src,set_field:2->arp_op,move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],set_field:aa:bb:cc:dd:ee:ff->arp_sha,move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],set_field:10.10.1.1->arp_spa,IN_PORT
+2. table=ARPResponder, priority=190,arp actions=NORMAL
+3. table=ARPResponder, priority=0 actions=drop
+```
+
+Flow 1 is designed for case 1, matching ARP request packets for the MAC address of a remote Antrea gateway with IP address
+`10.10.1.1`. It programs an ARP reply packet and sends it back to the port where the request packet was received. Note
+that both the source hardware address and the source MAC address in the ARP reply packet are set to the *Global Virtual
+MAC* `aa:bb:cc:dd:ee:ff`, not the actual MAC address of the remote Antrea gateway. This ensures that once the traffic is
+received by the remote OVS bridge, it can be directly forwarded to the appropriate Pod without actually going through
+the local Antrea gateway. The *Global Virtual MAC* is used as the destination MAC address for all the traffic being
+tunneled or routed.
+
+This flow serves as the "ARP responder" for the peer Node whose local Pod subnet is `10.10.1.0/24`. If we were to look
+at the routing table for the local Node, we would find the following "onlink" route:
+
+```text
+10.10.1.0/24 via 10.10.1.1 dev antrea-gw0 onlink
+```
+
+A similar route is installed on the local Antrea gateway (antrea-gw0) interface every time the Antrea *Node Route Controller*
+is notified that a new Node has joined the cluster. The route must be marked as "onlink" since the kernel does not have
+a route to the peer gateway `10.10.1.1`. We "trick" the kernel into believing that `10.10.1.1` is directly connected to
+the local Node, even though it is on the other side of the tunnel.
+
+Flow 2 is designed for case 2, ensuring that OVS handles the remainder of ARP traffic as a regular L2 learning switch
+(using the `normal` action). In particular, this takes care of forwarding ARP requests and replies among local Pods.
+
+Flow 3 is the table-miss flow, which should never be used since ARP packets will be matched by either flow 1 or 2.
+
+### Classifier
+
+This table is designed to determine the "category" of IP packets by matching on their ingress port. It addresses
+specific cases:
+
+1. Packets originating from the local Node through the local Antrea gateway port, requiring IP spoof legitimacy
+ verification.
+2. Packets originating from the external network through the Antrea gateway port.
+3. Packets received through an overlay tunnel.
+4. Packets received through a return port defined in a user-provided TrafficControl CR (for feature `TrafficControl`).
+5. Packets returned from an application-aware engine through a specific port (for feature `L7NetworkPolicy`).
+6. Packets originating from local Pods, requiring IP spoof legitimacy verification.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=Classifier, priority=210,ip,in_port="antrea-gw0",nw_src=10.10.0.1 actions=set_field:0x2/0xf->reg0,set_field:0x10000000/0x10000000->reg4,goto_table:SpoofGuard
+2. table=Classifier, priority=200,in_port="antrea-gw0" actions=set_field:0x2/0xf->reg0,set_field:0x8000000/0x8000000->reg4,goto_table:SpoofGuard
+3. table=Classifier, priority=200,in_port="antrea-tun0" actions=set_field:0x1/0xf->reg0,set_field:0x200/0x200->reg0,goto_table:UnSNAT
+4. table=Classifier, priority=200,in_port="antrea-tc-tap2" actions=set_field:0x6/0xf->reg0,goto_table:L3Forwarding
+5. table=Classifier, priority=200,in_port="antrea-l7-tap1",vlan_tci=0x1000/0x1000 actions=pop_vlan,set_field:0x6/0xf->reg0,goto_table:L3Forwarding
+6. table=Classifier, priority=190,in_port="client-6-3353ef" actions=set_field:0x3/0xf->reg0,set_field:0x10000000/0x10000000->reg4,goto_table:SpoofGuard
+7. table=Classifier, priority=190,in_port="web-7975-274540" actions=set_field:0x3/0xf->reg0,set_field:0x10000000/0x10000000->reg4,goto_table:SpoofGuard
+8. table=Classifier, priority=190,in_port="db-755c6-5080e3" actions=set_field:0x3/0xf->reg0,set_field:0x10000000/0x10000000->reg4,goto_table:SpoofGuard
+9. table=Classifier, priority=0 actions=drop
+```
+
+Flow 1 is designed for case 1, matching the source IP address `10.10.0.1` to ensure that the packets are originating from
+the local Antrea gateway. The following reg marks are loaded:
+
+- `FromGatewayRegMark`, indicating that the packets are received on the local Antrea gateway port, which will be
+ consumed in tables [L3Forwarding], [L3DecTTL], [SNATMark] and [SNAT].
+- `FromLocalRegMark`, indicating that the packets are from the local Node, which will be consumed in table [ServiceLB].
+
+Flow 2 is designed for case 2, matching packets originating from the external network through the Antrea gateway port
+and forwarding them to table [SpoofGuard]. Since packets originating from the local Antrea gateway are matched by flow
+1, flow 2 can only match packets originating from the external network. The following reg marks are loaded:
+
+- `FromGatewayRegMark`, the same as flow 1.
+- `FromExternalRegMark`, indicating that the packets are from the external network, not the local Node.
+
+Flow 3 is for case 3, matching packets through an overlay tunnel (i.e., from another Node) and forwarding them to table
+[UnSNAT]. This approach is based on the understanding that these packets originate from remote Nodes, potentially
+bearing varying source IP addresses. These packets undergo legitimacy verification before being tunneled. As a consequence,
+packets from the tunnel should be seamlessly forwarded to table [UnSNAT]. The following reg marks are loaded:
+
+- `FromTunnelRegMark`, indicating that the packets are received on a tunnel, consumed in table [L3Forwarding].
+- `RewriteMACRegMark`, indicating that the source and destination MAC addresses of the packets should be rewritten,
+ and consumed in table [L3Forwarding].
+
+Flow 4 is for case 4, matching packets from a TrafficControl return port and forwarding them to table [L3Forwarding]
+to decide the egress port. It's important to note that a forwarding decision for these packets was already made before
+redirecting them to the TrafficControl target port in table [Output], and at this point, the source and destination MAC
+addresses of these packets have already been set to the correct values. The only purpose of forwarding the packets to
+table [L3Forwarding] is to load the tunnel destination IP for packets destined for remote Nodes. This ensures that the
+returned packets destined for remote Nodes are forwarded through the tunnel. `FromTCReturnRegMark`, which will be used
+in table [TrafficControl], is loaded to mark the packet source.
+
+Flow 5 is for case 5, matching packets returned from an application-aware engine through a specific port, stripping
+the VLAN ID used by the application-aware engine, and forwarding them to table [L3Forwarding] to decide the egress port.
+Like flow 4, the purpose of forwarding the packets to table [L3Forwarding] is to load the tunnel destination IP for
+packets destined for remote Nodes, and `FromTCReturnRegMark` is also loaded.
+
+Flows 6-8 are for case 6, matching packets from local Pods and forwarding them to table [SpoofGuard] for legitimacy
+verification. The following reg marks are loaded:
+
+- `FromPodRegMark`, indicating that the packets are received on the ports connected to the local Pods, consumed in
+ tables [L3Forwarding] and [SNATMark].
+- `FromLocalRegMark`, indicating that the packets are from the local Pods, consumed in table [ServiceLB].
+
+Flow 9 is the table-miss flow to drop packets that are not matched by flows 1-8.
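+
+To check which of these flows a given packet would match, you can trace a synthetic packet through the pipeline with
+`ovs-appctl ofproto/trace`. Below is a minimal sketch, assuming the default bridge name `br-int` and the sample port
+name `client-6-3353ef` from the dump above; the command is typically run from inside the `antrea-ovs` container of the
+Antrea Agent Pod.
+
+```bash
+# Trace a TCP packet entering through a local Pod port (case 6 above); the
+# output should show the packet matching flow 6 and moving on to SpoofGuard.
+ovs-appctl ofproto/trace br-int \
+    'in_port=client-6-3353ef,tcp,nw_src=10.10.0.26,nw_dst=10.10.0.24,tp_dst=80'
+```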
+
+### SpoofGuard
+
+This table is crafted to prevent IP [spoofing](https://en.wikipedia.org/wiki/Spoofing_attack) from local Pods. It
+addresses specific cases:
+
+1. Allowing all packets from the local Antrea gateway. We do not perform checks for this interface as we need to accept
+ external traffic with a source IP address that does not match the gateway IP.
+2. Ensuring that the source IP and MAC addresses are correct, i.e., matching the values configured on the interface when
+ Antrea sets up networking for a Pod.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=SpoofGuard, priority=200,ip,in_port="antrea-gw0" actions=goto_table:UnSNAT
+2. table=SpoofGuard, priority=200,ip,in_port="client-6-3353ef",dl_src=5e:b5:e3:a6:90:b7,nw_src=10.10.0.26 actions=goto_table:UnSNAT
+3. table=SpoofGuard, priority=200,ip,in_port="web-7975-274540",dl_src=fa:b7:53:74:21:a6,nw_src=10.10.0.24 actions=goto_table:UnSNAT
+4. table=SpoofGuard, priority=200,ip,in_port="db-755c6-5080e3",dl_src=36:48:21:a2:9d:b4,nw_src=10.10.0.25 actions=goto_table:UnSNAT
+5. table=SpoofGuard, priority=0 actions=drop
+```
+
+Flow 1 is for case 1, matching packets received on the local Antrea gateway port without checking the source IP and MAC
+addresses. There are several cases where packets entering through the local Antrea gateway port may carry a source IP
+other than the local Antrea gateway IP address:
+
+- When Antrea is deployed with kube-proxy, and the feature `AntreaProxy` is not enabled, packets from local Pods destined
+  for Services will first go through the gateway port, get load-balanced by the kube-proxy data path (undergoing DNAT),
+  then re-enter the OVS pipeline through the gateway port (through an "onlink" route, installed by Antrea, directing the
+  DNAT'd packets to the gateway port), resulting in the source IP being that of a local Pod.
+- When Antrea is deployed without kube-proxy, and both the feature `AntreaProxy` and option `proxyAll` are enabled,
+  packets from the external network destined for Services will be routed to OVS through the gateway port without their
+  source IP being masqueraded.
+- When Antrea is deployed with kube-proxy, packets from the external network destined for Services whose
+  `externalTrafficPolicy` is set to `Local` will get load-balanced by the kube-proxy data path (undergoing DNAT with a
+  local Endpoint selected by kube-proxy) and then enter the OVS pipeline through the gateway (through an "onlink"
+  route, installed by Antrea, directing the DNAT'd packets to the gateway port) without their source IP being masqueraded.
+
+Flows 2-4 are for case 2, matching legitimate IP packets from local Pods.
+
+Flow 5 is the table-miss flow to drop IP spoofing packets.
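+
+To see the table-miss flow in action, you can trace a packet with a forged source IP. A sketch, assuming the same
+bridge and port names as above; since the Pod behind `client-6-3353ef` owns `10.10.0.26`, the trace should end with a
+drop in this table.
+
+```bash
+# A source IP that does not match the Pod interface should be dropped here.
+ovs-appctl ofproto/trace br-int \
+    'in_port=client-6-3353ef,tcp,nw_src=10.10.0.99,nw_dst=10.10.0.24,tp_dst=80'
+```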
+
+### UnSNAT
+
+This table is used to undo SNAT on reply packets by invoking action `ct` on them. The packets are from SNAT'd Service
+connections that have been committed to `SNATCtZone` in table [SNAT]. After invoking action `ct`, the packets will be
+in a "tracked" state, restoring all [connection tracking
+fields](https://www.openvswitch.org/support/dist-docs/ovs-fields.7.txt) (such as `ct_state`, `ct_mark`, `ct_label`, etc.)
+to their original values. The packets with a "tracked" state are then forwarded to table [ConntrackZone].
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=UnSNAT, priority=200,ip,nw_dst=169.254.0.253 actions=ct(table=ConntrackZone,zone=65521,nat)
+2. table=UnSNAT, priority=200,ip,nw_dst=10.10.0.1 actions=ct(table=ConntrackZone,zone=65521,nat)
+3. table=UnSNAT, priority=0 actions=goto_table:ConntrackZone
+```
+
+Flow 1 matches reply packets for Service connections which were SNAT'd with the *Virtual Service IP* `169.254.0.253`
+and invokes action `ct` on them.
+
+Flow 2 matches packets for Service connections which were SNAT'd with the local Antrea gateway IP `10.10.0.1` and
+invokes action `ct` on them. This flow also matches request packets destined for the local Antrea gateway IP from
+local Pods by accident. However, this is harmless since such connections will never be committed to `SNATCtZone`, and
+therefore, connection tracking fields for the packets are unset.
+
+Flow 3 is the table-miss flow.
+
+For reply packets from SNAT'd connections, whose destination IP is the translated SNAT IP, after invoking action `ct`,
+the destination IP of the packets will be restored to the original IP before SNAT, stored in the connection tracking
+field `ct_nw_dst`.
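+
+If you want to inspect the SNAT'd connections committed to `SNATCtZone`, you can dump the conntrack entries for zone
+65521. A minimal sketch, assuming the `conntrack` tool is available on the Node; the same entries can also be listed
+from the `antrea-ovs` container with `ovs-appctl`.
+
+```bash
+# List conntrack entries in SNATCtZone (zone 65521) on the Node.
+conntrack -L -w 65521
+# Alternatively, dump the same zone through OVS.
+ovs-appctl dpctl/dump-conntrack zone=65521
+```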
+
+### ConntrackZone
+
+The main purpose of this table is to invoke the `ct` action on packets from all connections. After the `ct` action is
+invoked, packets will be in a "tracked" state, restoring all connection tracking fields to their appropriate values.
+When the `ct` action is invoked with `CtZone` on packets whose "tracked" state is currently associated with
+`SNATCtZone`, the state associated with `SNATCtZone` becomes inaccessible, as the "tracked" state shifts to the one
+associated with `CtZone`. A ct zone is similar in spirit to the more generic Linux network namespaces, uniquely
+containing a "tracked" state within each ct zone.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=ConntrackZone, priority=200,ip actions=ct(table=ConntrackState,zone=65520,nat)
+2. table=ConntrackZone, priority=0 actions=goto_table:ConntrackState
+```
+
+Flow 1 invokes the `ct` action on packets from all connections, and the packets are then forwarded to table
+[ConntrackState] with the "tracked" state associated with `CtZone`. Note that for packets of an established Service
+(DNAT'd) connection, i.e., not the first packet of the connection, DNAT or un-DNAT is performed as part of this action
+before they are forwarded.
+
+Flow 2 is the table-miss flow that should remain unused.
+
+### ConntrackState
+
+This table handles packets from the connections that have a "tracked" state associated with `CtZone`. It addresses
+specific cases:
+
+1. Dropping invalid packets reported by conntrack.
+2. Forwarding tracked packets from all connections to table [AntreaPolicyEgressRule] directly, bypassing the tables
+ like [PreRoutingClassifier], [NodePortMark], [SessionAffinity], [ServiceLB], and [EndpointDNAT] for Service Endpoint
+ selection.
+3. Forwarding packets from new connections to table [PreRoutingClassifier] to start Service Endpoint selection since
+ Service connections are not identified at this stage.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=ConntrackState, priority=200,ct_state=+inv+trk,ip actions=drop
+2. table=ConntrackState, priority=190,ct_state=-new+trk,ct_mark=0/0x10,ip actions=goto_table:AntreaPolicyEgressRule
+3. table=ConntrackState, priority=190,ct_state=-new+trk,ct_mark=0x10/0x10,ip actions=set_field:0x200/0x200->reg0,goto_table:AntreaPolicyEgressRule
+4. table=ConntrackState, priority=0 actions=goto_table:PreRoutingClassifier
+```
+
+Flow 1 is for case 1, dropping invalid packets.
+
+Flow 2 is for case 2, matching packets from non-Service connections with `NotServiceCTMark` and forwarding them to
+table [AntreaPolicyEgressRule] directly, bypassing the tables for Service Endpoint selection.
+
+Flow 3 is also for case 2, matching packets from Service connections with `ServiceCTMark` loaded in table
+[EndpointDNAT] and forwarding them to table [AntreaPolicyEgressRule], bypassing the tables for Service Endpoint
+selection. `RewriteMACRegMark`, which is used in table [L3Forwarding], is loaded in this flow, indicating that the
+source and destination MAC addresses of the packets should be rewritten.
+
+Flow 4 is the table-miss flow for case 3, matching packets from all new connections and forwarding them to table
+[PreRoutingClassifier] to start the processing of Service Endpoint selection.
+
+### PreRoutingClassifier
+
+This table handles the first packet from uncommitted Service connections before Service Endpoint selection. It
+sequentially resubmits the packets to tables [NodePortMark] and [SessionAffinity] to do some pre-processing, including
+the loading of specific reg marks. Subsequently, it forwards the packets to table [ServiceLB] to perform Service Endpoint
+selection.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=PreRoutingClassifier, priority=200,ip actions=resubmit(,NodePortMark),resubmit(,SessionAffinity),resubmit(,ServiceLB)
+2. table=PreRoutingClassifier, priority=0 actions=goto_table:NodePortMark
+```
+
+Flow 1 sequentially resubmits packets to tables [NodePortMark], [SessionAffinity], and [ServiceLB]. Note that packets
+are ultimately forwarded to table [ServiceLB]. In tables [NodePortMark] and [SessionAffinity], only reg marks are loaded.
+
+Flow 2 is the table-miss flow that should remain unused.
+
+### NodePortMark
+
+This table is designed to potentially mark packets destined for NodePort Services. It is only created when `proxyAll` is
+enabled.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=NodePortMark, priority=200,ip,nw_dst=192.168.77.102 actions=set_field:0x80000/0x80000->reg4
+2. table=NodePortMark, priority=200,ip,nw_dst=169.254.0.252 actions=set_field:0x80000/0x80000->reg4
+3. table=NodePortMark, priority=0 actions=goto_table:SessionAffinity
+```
+
+Flow 1 matches packets destined for the local Node from local Pods. `NodePortRegMark` is loaded, indicating that the
+packets are potentially destined for NodePort Services. We assume only one valid IP address, `192.168.77.102` (the
+Node's transport IP), can serve as the host IP address for NodePort based on the option `antreaProxy.nodePortAddresses`.
+If there are multiple valid IP addresses specified in the option, a flow similar to flow 1 will be installed for each
+IP address.
+
+Flow 2 matches packets destined for the *Virtual NodePort DNAT IP*. Packets destined for NodePort Services from the
+local Node or the external network are DNAT'd to the *Virtual NodePort DNAT IP* by iptables before entering the pipeline.
+
+Flow 3 is the table-miss flow.
+
+Note that packets destined for NodePort Services have not been fully identified in this table, which only matches the
+destination IP address. The identification is completed in table [ServiceLB] by matching `NodePortRegMark` together
+with the specific destination port of a NodePort.
+
+### SessionAffinity
+
+This table is designed to implement Service session affinity. The learned flows that cache the information of the
+selected Endpoints are installed here.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=SessionAffinity, hard_timeout=300, priority=200,tcp,nw_src=10.10.0.1,nw_dst=10.96.76.15,tp_dst=80 \
+ actions=set_field:0x50/0xffff->reg4,set_field:0/0x4000000->reg4,set_field:0xa0a0001->reg3,set_field:0x20000/0x70000->reg4,set_field:0x200/0x200->reg0
+2. table=SessionAffinity, priority=0 actions=set_field:0x10000/0x70000->reg4
+```
+
+Flow 1 is a learned flow generated by flow 3 in table [ServiceLB], designed for the sample Service [ClusterIP with
+Session Affinity], to implement Service session affinity. Here are some details about the flow:
+
+- The "hard timeout" of the learned flow should be equal to the value of
+ `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` defined in the Service. This means that until the hard
+ timeout expires, this flow is present in the pipeline, and the session affinity of the Service takes effect. Unlike an
+ "idle timeout", the "hard timeout" does not reset whenever the flow is matched.
+- Source IP address, destination IP address, destination port, and transport protocol are used to match packets of
+ connections sourced from the same client and destined for the Service during the affinity time window.
+- Endpoint IP address and Endpoint port are loaded into `EndpointIPField` and `EndpointPortField` respectively.
+- `EpSelectedRegMark` is loaded, indicating that the Service Endpoint selection is done, and ensuring that the packets
+ will only match the last flow in table [ServiceLB].
+- `RewriteMACRegMark`, which will be consumed in table [L3Forwarding], is loaded here, indicating that the source and
+ destination MAC addresses of the packets should be rewritten.
+
+Flow 2 is the table-miss flow to match the first packet of connections destined for Services. `EpToSelectRegMark`,
+to be consumed in table [ServiceLB], is loaded, indicating that the packet still needs Service Endpoint selection.
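+
+Since learned flows are installed and expire dynamically, this table is worth watching when debugging session
+affinity. A sketch, assuming the default bridge name `br-int` and that the switch reports table names as in the dumps
+above.
+
+```bash
+# Show the learned session affinity flows along with their hard timeouts.
+ovs-ofctl dump-flows br-int table=SessionAffinity
+```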
+
+### ServiceLB
+
+This table is used to implement Service Endpoint selection. It addresses specific cases:
+
+1. ClusterIP, as demonstrated in the examples [ClusterIP without Endpoint] and [ClusterIP].
+2. NodePort, as demonstrated in the example [NodePort].
+3. LoadBalancer, as demonstrated in the example [LoadBalancer].
+4. Service configured with external IPs, as demonstrated in the example [Service with ExternalIP].
+5. Service configured with session affinity, as demonstrated in the example [Service with session affinity].
+6. Service configured with externalTrafficPolicy to `Local`, as demonstrated in the example [Service with
+ ExternalTrafficPolicy Local].
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.101.255.29,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x20000/0x70000->reg4,set_field:0x9->reg7,group:9
+2. table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.105.31.235,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x20000/0x70000->reg4,set_field:0xc->reg7,group:10
+3. table=ServiceLB, priority=200,tcp,reg4=0x90000/0xf0000,tp_dst=30004 actions=set_field:0x200/0x200->reg0,set_field:0x20000/0x70000->reg4,set_field:0x200000/0x200000->reg4,set_field:0xc->reg7,group:12
+4. table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=192.168.77.150,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x20000/0x70000->reg4,set_field:0xe->reg7,group:14
+5. table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=192.168.77.200,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x20000/0x70000->reg4,set_field:0x10->reg7,group:16
+6. table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.96.76.15,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x30000/0x70000->reg4,set_field:0xa->reg7,group:11
+7. table=ServiceLB, priority=190,tcp,reg4=0x30000/0x70000,nw_dst=10.96.76.15,tp_dst=80 actions=learn(table=SessionAffinity,hard_timeout=300,priority=200,delete_learned,cookie=0x203000000000a,\
+ eth_type=0x800,nw_proto=6,NXM_OF_TCP_DST[],NXM_OF_IP_DST[],NXM_OF_IP_SRC[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:NXM_NX_REG4[26]->NXM_NX_REG4[26],load:NXM_NX_REG3[]->NXM_NX_REG3[],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[9]),\
+ set_field:0x20000/0x70000->reg4,goto_table:EndpointDNAT
+8. table=ServiceLB, priority=210,tcp,reg4=0x10010000/0x10070000,nw_dst=192.168.77.151,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x20000/0x70000->reg4,set_field:0x11->reg7,group:17
+9. table=ServiceLB, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=192.168.77.151,tp_dst=80 actions=set_field:0x200/0x200->reg0,set_field:0x20000/0x70000->reg4,set_field:0x12->reg7,group:18
+10. table=ServiceLB, priority=0 actions=goto_table:EndpointDNAT
+```
+
+Flow 1 and flow 2 are designed for case 1, matching the first packet of connections destined for the sample [ClusterIP
+without Endpoint] or [ClusterIP]. This is achieved by matching `EpToSelectRegMark` loaded in table [SessionAffinity],
+the ClusterIP, and the port. The target of the packets matched by these flows is an OVS group where the Endpoint will
+be selected. Before forwarding the packets to the OVS group, `RewriteMACRegMark`, which will be consumed in table
+[L3Forwarding], is loaded, indicating that the source and destination MAC addresses of the packets should be rewritten.
+`EpSelectedRegMark`, which will be consumed in table [EndpointDNAT], is also loaded, indicating that the Endpoint is
+selected. Note that the Service Endpoint selection is not completed yet, as it will be done in the target OVS group.
+
+Flow 3 is for case 2, matching the first packet of connections destined for the sample [NodePort]. This is achieved by
+matching `EpToSelectRegMark` loaded in table [SessionAffinity], `NodePortRegMark` loaded in table [NodePortMark], and
+NodePort port. Similar to flows 1-2, `RewriteMACRegMark` and `EpSelectedRegMark` are also loaded.
+
+Flow 4 is for case 3, processing the first packet of connections destined for the ingress IP of the sample
+[LoadBalancer], similar to flow 1.
+
+Flow 5 is for case 4, processing the first packet of connections destined for the external IP of the sample [Service
+with ExternalIP], similar to flow 1.
+
+Flow 6 is the initial process for case 5, matching the first packet of connections destined for the sample [Service with
+Session Affinity]. This is achieved by matching the conditions similar to flow 1. Like flow 1, the target of the flow is
+also an OVS group, and `RewriteMACRegMark` is loaded. The difference is that `EpToLearnRegMark` is loaded, rather than
+`EpSelectedRegMark`, indicating that the selected Endpoint needs to be cached.
+
+Flow 7 is the final process for case 5, matching the packet previously matched by flow 6, resubmitted back from the target OVS
+group after selecting an Endpoint. Then a learned flow will be generated in table [SessionAffinity] to match the packets
+of the subsequent connections from the same client IP, ensuring that the packets are always forwarded to the same Endpoint
+selected the first time. `EpSelectedRegMark`, which will be consumed in table [EndpointDNAT], is loaded, indicating that
+Service Endpoint selection has been done.
+
+Flow 8 and flow 9 are for case 6. Flow 8 has a higher priority than flow 9, and matches the first packet of connections
+sourced from a local Pod or the local Node, identified by `FromLocalRegMark` loaded in table [Classifier], and destined
+for the sample [Service with ExternalTrafficPolicy Local]. The target of flow 8 is an OVS group that has all the
+Endpoints across the cluster, ensuring accessibility for Service connections originating from local Pods or Nodes, even
+though `externalTrafficPolicy` is set to `Local` for the Service. Due to the existence of flow 8, flow 9 exclusively
+matches packets sourced from the external network, resembling the pattern of flow 1. The target of flow 9 is an OVS
+group that has only the local Endpoints since `externalTrafficPolicy` of the Service is `Local`.
+
+Flow 10 is the table-miss flow.
+
+As mentioned above, the Service Endpoint selection is performed within OVS groups. 3 typical OVS groups are listed below:
+
+```text
+1. group_id=9,type=select,\
+ bucket=bucket_id:0,weight:100,actions=set_field:0x4000/0x4000->reg0,resubmit(,EndpointDNAT)
+2. group_id=10,type=select,\
+ bucket=bucket_id:0,weight:100,actions=set_field:0xa0a0018->reg3,set_field:0x50/0xffff->reg4,resubmit(,EndpointDNAT),\
+ bucket=bucket_id:1,weight:100,actions=set_field:0x4000000/0x4000000->reg4,set_field:0xa0a0106->reg3,set_field:0x50/0xffff->reg4,resubmit(,EndpointDNAT)
+3. group_id=11,type=select,\
+ bucket=bucket_id:0,weight:100,actions=set_field:0xa0a0018->reg3,set_field:0x50/0xffff->reg4,resubmit(,ServiceLB),\
+ bucket=bucket_id:1,weight:100,actions=set_field:0x4000000/0x4000000->reg4,set_field:0xa0a0106->reg3,set_field:0x50/0xffff->reg4,resubmit(,ServiceLB)
+```
+
+The first group with `group_id` 9 is the destination of packets matched by flow 1, designed for a Service without
+Endpoints. The group only has a single bucket where `SvcNoEpRegMark`, which will be used in table [EndpointDNAT], is
+loaded, indicating that the Service has no Endpoint; packets are then forwarded to table [EndpointDNAT].
+
+The second group with `group_id` 10 is the destination of packets matched by flow 2, designed for a Service with
+Endpoints. The group has 2 buckets, indicating the availability of 2 selectable Endpoints. Each bucket has an equal
+chance of being chosen since they have the same weights. For every bucket, the Endpoint IP and Endpoint port are loaded
+into `EndpointIPField` and `EndpointPortField`, respectively. These loaded values will be consumed in table
+[EndpointDNAT] to which the packets are forwarded and in which DNAT will be performed. `RemoteEndpointRegMark` is loaded
+for remote Endpoints, like the bucket with `bucket_id` 1 in this group.
+
+The third group with `group_id` 11 is the destination of packets matched by flow 6, designed for a Service that has
+Endpoints and is configured with session affinity. The group closely resembles the group with `group_id` 10, except that
+the destination of the packets is table [ServiceLB], rather than table [EndpointDNAT]. After being resubmitted back to table
+[ServiceLB], they will be matched by flow 7.
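+
+The group buckets can also be inspected directly. A minimal sketch, assuming the default bridge name `br-int`:
+
+```bash
+# Dump all OVS group definitions (the Endpoint selection buckets).
+ovs-ofctl dump-groups br-int
+# Dump per-bucket statistics, e.g. to check how traffic is spread across Endpoints.
+ovs-ofctl dump-group-stats br-int
+```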
+
+### EndpointDNAT
+
+The table implements DNAT for Service connections after Endpoint selection is performed in table [ServiceLB].
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=EndpointDNAT, priority=200,reg0=0x4000/0x4000 actions=controller(reason=no_match,id=62373,userdata=04)
+2. table=EndpointDNAT, priority=200,tcp,reg3=0xa0a0018,reg4=0x20050/0x7ffff actions=ct(commit,table=AntreaPolicyEgressRule,zone=65520,nat(dst=10.10.0.24:80),exec(set_field:0x10/0x10->ct_mark,move:NXM_NX_REG0[0..3]->NXM_NX_CT_MARK[0..3]))
+3. table=EndpointDNAT, priority=200,tcp,reg3=0xa0a0106,reg4=0x20050/0x7ffff actions=ct(commit,table=AntreaPolicyEgressRule,zone=65520,nat(dst=10.10.1.6:80),exec(set_field:0x10/0x10->ct_mark,move:NXM_NX_REG0[0..3]->NXM_NX_CT_MARK[0..3]))
+4. table=EndpointDNAT, priority=190,reg4=0x20000/0x70000 actions=set_field:0x10000/0x70000->reg4,resubmit(,ServiceLB)
+5. table=EndpointDNAT, priority=0 actions=goto_table:AntreaPolicyEgressRule
+```
+
+Flow 1 is designed for Services without Endpoints. It identifies the first packet of connections destined for such a
+Service by matching `SvcNoEpRegMark`. Subsequently, the packet is forwarded to the OpenFlow controller (Antrea Agent).
+For TCP Service traffic, the controller will send a TCP RST, and for all other cases the controller will send an ICMP
+Destination Unreachable message.
+
+Flows 2-3 are designed for Services that have selected an Endpoint. These flows identify the first packet of connections
+destined for such Services by matching `EndpointIPField`, which stores the Endpoint IP, and `EpUnionField` (a combination
+of `EndpointPortField` storing the Endpoint port and `EpSelectedRegMark`). Then the `ct` action is invoked on the packet,
+performing DNAT and forwarding it to table [ConntrackState] with the "tracked" state associated with `CtZone`.
+Some bits of ct mark are persisted:
+
+- `ServiceCTMark`, to be consumed in tables [L3Forwarding] and [ConntrackCommit], indicating that the current packet and
+ subsequent packets of the connection are for a Service.
+- The value of `PktSourceField` is persisted to `ConnSourceCTMarkField`, storing the source of the connection for the
+ current packet and subsequent packets of the connection.
+
+Flow 4 resubmits the packets which are not matched by flows 1-3 back to table [ServiceLB] to select an Endpoint again.
+
+Flow 5 is the table-miss flow to match non-Service packets.
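+
+Because DNAT is committed to `CtZone` (zone 65520) in this table, the Endpoint selected for a Service connection can
+be verified from conntrack. A sketch, assuming the `conntrack` tool on the Node and the sample ClusterIP
+`10.105.31.235` from table [ServiceLB]; the reply direction of each entry reveals the selected Endpoint after DNAT.
+
+```bash
+# Show connections destined for the sample ClusterIP in CtZone (zone 65520).
+conntrack -L -w 65520 -d 10.105.31.235
+```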
+
+### AntreaPolicyEgressRule
+
+This table is used to implement the egress rules across all Antrea-native NetworkPolicies, except for NetworkPolicies
+that are created in the Baseline Tier. Antrea-native NetworkPolicies created in the Baseline Tier are enforced after
+K8s NetworkPolicies: their egress rules are installed in table [EgressDefaultRule], while K8s NetworkPolicy egress
+rules are installed in table [EgressRule], i.e.
+
+```text
+Antrea-native NetworkPolicy other Tiers -> AntreaPolicyEgressRule
+K8s NetworkPolicy -> EgressRule
+Antrea-native NetworkPolicy Baseline Tier -> EgressDefaultRule
+```
+
+Antrea-native NetworkPolicy relies on the OVS built-in `conjunction` action to implement policies efficiently. This
+enables us to do a conjunctive match across multiple dimensions (source IP, destination IP, port, etc.) efficiently
+without "exploding" the number of flows. For example, a rule matching 10 source IPs, 10 destination IPs, and 5 ports
+requires only 10+10+5 clause flows plus one `conj_id` flow, instead of 10x10x5 individual flows. For our use case, we
+have at most 3 dimensions.
+
+The only requirement of `conj_id` is to be a unique 32-bit integer within the table. At the moment we use a single
+custom allocator, which is common to all tables that can have NetworkPolicy flows installed
+([AntreaPolicyEgressRule], [EgressRule], [EgressDefaultRule], [AntreaPolicyIngressRule], [IngressRule], and
+[IngressDefaultRule]).
+
+For this table, you will need to keep in mind the Antrea-native NetworkPolicy
+[specification](#antrea-native-networkpolicy-implementation). Since the sample egress policy resides in the Application
+Tier, if you dump the flows of this table, you may see the following:
+
+```text
+1. table=AntreaPolicyEgressRule, priority=64990,ct_state=-new+est,ip actions=goto_table:EgressMetric
+2. table=AntreaPolicyEgressRule, priority=64990,ct_state=-new+rel,ip actions=goto_table:EgressMetric
+3. table=AntreaPolicyEgressRule, priority=14500,ip,nw_src=10.10.0.24 actions=conjunction(7,1/3)
+4. table=AntreaPolicyEgressRule, priority=14500,ip,nw_dst=10.10.0.25 actions=conjunction(7,2/3)
+5. table=AntreaPolicyEgressRule, priority=14500,tcp,tp_dst=3306 actions=conjunction(7,3/3)
+6. table=AntreaPolicyEgressRule, priority=14500,conj_id=7,ip actions=set_field:0x7->reg5,ct(commit,table=EgressMetric,zone=65520,exec(set_field:0x700000000/0xffffffff00000000->ct_label))
+7. table=AntreaPolicyEgressRule, priority=14499,ip,nw_src=10.10.0.24 actions=conjunction(5,1/2)
+8. table=AntreaPolicyEgressRule, priority=14499,ip actions=conjunction(5,2/2)
+9. table=AntreaPolicyEgressRule, priority=14499,conj_id=5 actions=set_field:0x5->reg3,set_field:0x400/0x400->reg0,goto_table:EgressMetric
+10. table=AntreaPolicyEgressRule, priority=0 actions=goto_table:EgressRule
+```
+
+Flows 1-2, which are installed by default with the highest priority, match non-new and "tracked" packets and
+forward them to table [EgressMetric] to bypass the check from egress rules. This means that if a connection is
+established, its packets go straight to table [EgressMetric], with no other match required. In particular, this ensures
+that reply traffic is never dropped because of an Antrea-native NetworkPolicy or K8s NetworkPolicy rule. However, this
+also means that ongoing connections are not affected if the Antrea-native NetworkPolicy or the K8s NetworkPolicy is
+updated.
+
+The priorities of flows 3-9 installed for the egress rules are decided by the following:
+
+- The `spec.tier` value in an Antrea-native NetworkPolicy determines the primary level for flow priority.
+- The `spec.priority` value in an Antrea-native NetworkPolicy determines the secondary level for flow priority within
+ the same `spec.tier`. A lower value in this field corresponds to a higher priority for the flow.
+- The rule's position within an Antrea-native NetworkPolicy also influences flow priority. Rules positioned closer to
+ the beginning have higher priority for the flow.
+
+Flows 3-6, whose priorities are all 14500, are installed for the egress rule `AllowToDB` in the sample policy. These
+flows are described as follows:
+
+- Flow 3 is used to match packets with the source IP address in set {10.10.0.24}, which has all IP addresses of the Pods
+ selected by the label `app: web`, constituting the first dimension for `conjunction` with `conj_id` 7.
+- Flow 4 is used to match packets with the destination IP address in set {10.10.0.25}, which has all IP addresses of
+ the Pods selected by the label `app: db`, constituting the second dimension for `conjunction` with `conj_id` 7.
+- Flow 5 is used to match packets with the destination TCP port in set {3306} specified in the rule, constituting the
+ third dimension for `conjunction` with `conj_id` 7.
+- Flow 6 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 7 and forward them
+ to table [EgressMetric], persisting `conj_id` to `EgressRuleCTLabel`, which will be consumed in table [EgressMetric].
+
+Flows 7-9, whose priorities are all 14499, are installed for the egress rule with a `Drop` action defined after the rule
+`AllowToDB` in the sample policy, and serve as a default rule. Antrea-native NetworkPolicy does not have the same
+default isolated behavior as K8s NetworkPolicy (implemented in the [EgressDefaultRule] table). As soon as a rule is
+matched, we apply the corresponding action. If no rule is matched, there is no implicit drop for Pods to which an
+Antrea-native NetworkPolicy applies. These flows are described as follows:
+
+- Flow 7 is used to match packets with the source IP address in set {10.10.0.24}, which is from the Pods selected
+ by the label `app: web`, constituting the first dimension for `conjunction` with `conj_id` 5.
+- Flow 8 is used to match any IP packets, constituting the second dimension for `conjunction` with `conj_id` 5. This
+ flow, which matches all IP packets, exists because we need at least 2 dimensions for a conjunctive match.
+- Flow 9 is used to match packets meeting both dimensions of `conjunction` with `conj_id` 5. `APDenyRegMark` is
+ loaded and will be consumed in table [EgressMetric] to which the packets are forwarded.
+
+Flow 10 is the table-miss flow to forward packets not matched by other flows to table [EgressMetric].
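+
+When debugging a specific rule, it can help to pull out only the flows participating in its conjunctive match. A
+sketch, assuming `conj_id` 7 from the dump above and the default bridge name `br-int`:
+
+```bash
+# List the clause flows and the conj_id flow for conjunction 7.
+ovs-ofctl dump-flows br-int table=AntreaPolicyEgressRule | grep -E 'conjunction\(7,|conj_id=7'
+```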
+
+### EgressRule
+
+For this table, you will need to keep in mind the K8s NetworkPolicy
+[specification](#kubernetes-networkpolicy-implementation) that we are using.
+
+This table is used to implement the egress rules across all K8s NetworkPolicies. If you dump the flows for this table,
+you may see the following:
+
+```text
+1. table=EgressRule, priority=200,ip,nw_src=10.10.0.24 actions=conjunction(2,1/3)
+2. table=EgressRule, priority=200,ip,nw_dst=10.10.0.25 actions=conjunction(2,2/3)
+3. table=EgressRule, priority=200,tcp,tp_dst=3306 actions=conjunction(2,3/3)
+4. table=EgressRule, priority=190,conj_id=2,ip actions=set_field:0x2->reg5,ct(commit,table=EgressMetric,zone=65520,exec(set_field:0x200000000/0xffffffff00000000->ct_label))
+5. table=EgressRule, priority=0 actions=goto_table:EgressDefaultRule
+```
+
+Flows 1-4 are installed for the egress rule in the sample K8s NetworkPolicy. These flows are described as follows:
+
+- Flow 1 is to match packets with the source IP address in set {10.10.0.24}, which has all IP addresses of the Pods
+ selected by the label `app: web` in the `default` Namespace, constituting the first dimension for `conjunction` with `conj_id` 2.
+- Flow 2 is to match packets with the destination IP address in set {10.10.0.25}, which has all IP addresses of the Pods
+ selected by the label `app: db` in the `default` Namespace, constituting the second dimension for `conjunction` with `conj_id` 2.
+- Flow 3 is to match packets with the destination TCP port in set {3306} specified in the rule, constituting the third
+ dimension for `conjunction` with `conj_id` 2.
+- Flow 4 is to match packets meeting all the three dimensions of `conjunction` with `conj_id` 2 and forward them to
+ table [EgressMetric], persisting `conj_id` to `EgressRuleCTLabel`.
+
+Flow 5 is the table-miss flow to forward packets not matched by other flows to table [EgressDefaultRule].
+
+### EgressDefaultRule
+
+This table complements table [EgressRule] for K8s NetworkPolicy egress rule implementation. When a NetworkPolicy is
+applied to a set of Pods, the default behavior for egress connections for these Pods becomes "deny" (they become
+[isolated Pods](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)).
+This table is in charge of dropping traffic originating from Pods to which a NetworkPolicy (with an egress rule) is
+applied, and which did not match any of the "allowed" list rules.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=EgressDefaultRule, priority=200,ip,nw_src=10.10.0.24 actions=drop
+2. table=EgressDefaultRule, priority=0 actions=goto_table:EgressMetric
+```
+
+Flow 1, based on our sample K8s NetworkPolicy, is to drop traffic originating from 10.10.0.24, an IP address associated
+with a Pod selected by the label `app: web`. If multiple Pods are selected by the label `app: web`, you will see a
+similar flow for each IP address.
+
+Flow 2 is the table-miss flow to forward packets to table [EgressMetric].
+
+This table is also used to implement Antrea-native NetworkPolicy egress rules that are created in the Baseline Tier.
+Since the Baseline Tier is meant to be enforced after K8s NetworkPolicies, the corresponding flows will be created at a
+lower priority than K8s NetworkPolicy default drop flows. These flows are similar to flows 3-9 in table
+[AntreaPolicyEgressRule]. For the sake of simplicity, we have not defined any example Baseline policies in this document.
+
+### EgressMetric
+
+This table is used to collect egress metrics for Antrea-native NetworkPolicies and K8s NetworkPolicies.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=EgressMetric, priority=200,ct_state=+new,ct_label=0x200000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding
+2. table=EgressMetric, priority=200,ct_state=-new,ct_label=0x200000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding
+3. table=EgressMetric, priority=200,ct_state=+new,ct_label=0x700000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding
+4. table=EgressMetric, priority=200,ct_state=-new,ct_label=0x700000000/0xffffffff00000000,ip actions=goto_table:L3Forwarding
+5. table=EgressMetric, priority=200,reg0=0x400/0x400,reg3=0x5 actions=drop
+6. table=EgressMetric, priority=0 actions=goto_table:L3Forwarding
+```
+
+Flows 1-2, matching packets with `EgressRuleCTLabel` set to 2, the `conj_id` allocated for the sample K8s NetworkPolicy
+egress rule and loaded in table [EgressRule] flow 4, are used to collect metrics for the egress rule.
+
+Flows 3-4, matching packets with `EgressRuleCTLabel` set to 7, the `conj_id` allocated for the sample Antrea-native
+NetworkPolicy egress rule and loaded in table [AntreaPolicyEgressRule] flow 6, are used to collect metrics for the
+egress rule.
+
+Flow 5 serves as the drop rule for the sample Antrea-native NetworkPolicy egress rule. It drops the packets by matching
+`APDenyRegMark` loaded in table [AntreaPolicyEgressRule] flow 9 and `APConjIDField` set to 5, which is the `conj_id`
+allocated for the egress rule and loaded in table [AntreaPolicyEgressRule] flow 9.
+
+These flows have no explicit action besides the `goto_table` action. This is because we rely on the "implicit" flow
+counters to keep track of connection / packet statistics.
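+
+These counters are directly visible when dumping the table, as `ovs-ofctl dump-flows` prints `n_packets` and
+`n_bytes` for each flow by default. A minimal sketch, assuming the default bridge name `br-int`:
+
+```bash
+# The n_packets / n_bytes fields of these flows are the source of the metrics.
+ovs-ofctl dump-flows br-int table=EgressMetric
+```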
+
+A ct label is used in flows 1-4, while a reg mark is used in flow 5. The distinction lies in the fact that the value
+persisted in the ct label can be read throughout the entire lifecycle of a connection, while a reg mark is only valid
+for the current packet. For a connection permitted by a rule, all its packets should be counted for metrics, thus a ct
+label is used. For a connection denied or dropped by a rule, the first packet and any subsequent retry packets will be
+blocked, therefore a reg mark is enough.
+
+Flow 6 is the table-miss flow.
+
+### L3Forwarding
+
+This table, designated as the L3 routing table, serves to assign suitable source and destination MAC addresses to
+packets based on their destination IP addresses, as well as their reg marks or ct marks.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=L3Forwarding, priority=210,ip,nw_dst=10.10.0.1 actions=set_field:ba:5e:d1:55:aa:c0->eth_dst,set_field:0x20/0xf0->reg0,goto_table:L3DecTTL
+2. table=L3Forwarding, priority=210,ct_state=+rpl+trk,ct_mark=0x2/0xf,ip actions=set_field:ba:5e:d1:55:aa:c0->eth_dst,set_field:0x20/0xf0->reg0,goto_table:L3DecTTL
+3. table=L3Forwarding, priority=200,ip,reg0=0/0x200,nw_dst=10.10.0.0/24 actions=goto_table:L2ForwardingCalc
+4. table=L3Forwarding, priority=200,ip,nw_dst=10.10.1.0/24 actions=set_field:ba:5e:d1:55:aa:c0->eth_src,set_field:aa:bb:cc:dd:ee:ff->eth_dst,set_field:192.168.77.103->tun_dst,set_field:0x10/0xf0->reg0,goto_table:L3DecTTL
+5. table=L3Forwarding, priority=200,ip,reg0=0x200/0x200,nw_dst=10.10.0.24 actions=set_field:ba:5e:d1:55:aa:c0->eth_src,set_field:fa:b7:53:74:21:a6->eth_dst,goto_table:L3DecTTL
+6. table=L3Forwarding, priority=200,ip,reg0=0x200/0x200,nw_dst=10.10.0.25 actions=set_field:ba:5e:d1:55:aa:c0->eth_src,set_field:36:48:21:a2:9d:b4->eth_dst,goto_table:L3DecTTL
+7. table=L3Forwarding, priority=200,ip,reg0=0x200/0x200,nw_dst=10.10.0.26 actions=set_field:ba:5e:d1:55:aa:c0->eth_src,set_field:5e:b5:e3:a6:90:b7->eth_dst,goto_table:L3DecTTL
+8. table=L3Forwarding, priority=190,ct_state=-rpl+trk,ip,reg0=0x3/0xf,reg4=0/0x100000 actions=goto_table:EgressMark
+9. table=L3Forwarding, priority=190,ct_state=-rpl+trk,ip,reg0=0x1/0xf actions=set_field:ba:5e:d1:55:aa:c0->eth_dst,goto_table:EgressMark
+10. table=L3Forwarding, priority=190,ct_mark=0x10/0x10,reg0=0x202/0x20f actions=set_field:ba:5e:d1:55:aa:c0->eth_dst,set_field:0x20/0xf0->reg0,goto_table:L3DecTTL
+11. table=L3Forwarding, priority=0 actions=set_field:0x20/0xf0->reg0,goto_table:L2ForwardingCalc
+```
+
+Flow 1 matches packets destined for the local Antrea gateway IP, rewrites their destination MAC address to that of the
+local Antrea gateway, loads `ToGatewayRegMark`, and forwards them to table [L3DecTTL] to decrease the TTL value. The
+action of rewriting the destination MAC address is not necessary, but also not harmful, for Pod-to-gateway request
+packets, because the destination MAC address is already the local gateway MAC address. In short, the action is only
+necessary for `AntreaIPAM` Pods, and not required by the sample NodeIPAM Pods in this document.
+
+Flow 2 matches reply packets with corresponding ct "tracked" states and `FromGatewayCTMark` from connections initiated
+through the local Antrea gateway. In other words, these are connections for which the first packet of the connection
+(SYN packet for TCP) was received through the local Antrea gateway. It rewrites the destination MAC address to
+that of the local Antrea gateway, loads `ToGatewayRegMark`, and forwards them to table [L3DecTTL]. This ensures that
+reply packets can be forwarded back to the local Antrea gateway in subsequent tables. This flow is required to handle
+the following cases when `AntreaProxy` is not enabled:
+
+- Reply traffic for connections from a local Pod to a ClusterIP Service, which are handled by kube-proxy and go through
+  DNAT. In this case, the destination IP address of the reply traffic is the Pod which initiated the connection to the
+  Service (no SNAT by kube-proxy). These packets should be forwarded back to the local Antrea gateway so that the
+  third-party module (e.g., kube-proxy) can complete the un-DNAT process. The destination MAC of the packets is
+  rewritten in the table to prevent them from being forwarded to the original client Pod by mistake.
+- When hairpin is involved, i.e. connections between 2 local Pods, for which NAT is performed. One example is a
+ Pod accessing a NodePort Service for which externalTrafficPolicy is set to `Local` using the local Node's IP address,
+ as there will be no SNAT for such traffic. Another example could be hostPort support, depending on how the feature
+ is implemented.
+
+Flow 3 matches packets from intra-Node connections (excluding Service connections) marked with
+`NotRewriteMACRegMark`, indicating that the destination and source MACs of packets should not be overwritten, and
+forwards them to table [L2ForwardingCalc] instead of table [L3DecTTL]. The deviation is due to local Pod connections
+not traversing any router device or undergoing the NAT process. For packets from Service or inter-Node connections,
+`RewriteMACRegMark`, mutually exclusive with `NotRewriteMACRegMark`, is loaded. Therefore, those packets will not be
+matched by the flow.
+
+Flow 4 is designed to match packets destined for a remote Pod CIDR. This involves installing a separate flow for each remote
+Node, with each flow matching the destination IP address of the packets against the Pod subnet for the respective Node.
+For the matched packets, the source MAC address is set to that of the local Antrea gateway MAC, and the destination
+MAC address is set to the *Global Virtual MAC*. The OpenFlow `tun_dst` field is set to the appropriate value (i.e.
+the IP address of the remote Node). Additionally, `ToTunnelRegMark` is loaded, signifying that the packets will be
+forwarded to remote Nodes through a tunnel. The matched packets are then forwarded to table [L3DecTTL] to decrease the TTL
+value.
+
+Flows 5-7 match packets destined for local Pods and marked by `RewriteMACRegMark`, which signifies that the packets may
+originate from Service or inter-Node connections. For the matched packets, the source MAC address is set to that of the
+local Antrea gateway MAC, and the destination MAC address is set to the associated local Pod MAC address. The matched
+packets are then forwarded to table [L3DecTTL] to decrease the TTL value.
+
+Flow 8 matches request packets originating from local Pods and destined for the external network, and then forwards them
+to table [EgressMark] dedicated to feature `Egress`. In table [EgressMark], SNAT IPs for Egress are looked up for the packets.
+To match the expected packets, `FromPodRegMark` is used to exclude packets that are not from local Pods.
+Additionally, `NotAntreaFlexibleIPAMRegMark`, mutually exclusive with `AntreaFlexibleIPAMRegMark` which is used to mark
+packets from Antrea IPAM Pods, is used since Egress can only be applied to Node IPAM Pods.
+
+It's worth noting that packets sourced from local Pods and destined for the Services listed in the option
+`antreaProxy.skipServices` are unexpectedly matched by flow 8, due to the fact that there is no flow in [ServiceLB]
+to handle these Services. Consequently, the destination IP address of the packets, allocated from the Service CIDR,
+is considered part of the "external network". This mismatch is harmless, as flow 3 in table [EgressMark]
+is designed to match these packets and prevent them from undergoing SNAT by Egress.
+
+Flow 9 matches request packets originating from remote Pods and destined for the external network, and then forwards them
+to table [EgressMark] dedicated to feature `Egress`. To match the expected packets, `FromTunnelRegMark` is used to
+include packets that are from remote Pods through a tunnel. Considering that the packets from remote Pods traverse a
+tunnel, the destination MAC address of the packets, represented by the *Global Virtual MAC*, needs to be rewritten to
+the MAC address of the local Antrea gateway.
+
+Flow 10 matches packets from Service connections that are originating from the local Antrea gateway and destined for the
+external network. This is accomplished by matching `RewriteMACRegMark`, `FromGatewayRegMark`, and `ServiceCTMark`. The
+destination MAC address is then set to that of the local Antrea gateway. Additionally, `ToGatewayRegMark`, which will be
+used with `FromGatewayRegMark` together to identify hairpin connections in table [SNATMark], is loaded. Finally,
+the packets are forwarded to table [L3DecTTL].
+
+Flow 11 is the table-miss flow. It is used for packets originating from local Pods and destined for the external
+network, forwarding them to table [L2ForwardingCalc]. `ToGatewayRegMark` is loaded as the matched packets traverse the
+local Antrea gateway.
+
+### EgressMark
+
+This table is dedicated to feature `Egress`. It includes flows to select the right SNAT IPs for egress traffic
+originating from Pods and destined for the external network.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=EgressMark, priority=210,ip,nw_dst=192.168.77.102 actions=set_field:0x20/0xf0->reg0,goto_table:L2ForwardingCalc
+2. table=EgressMark, priority=210,ip,nw_dst=192.168.77.103 actions=set_field:0x20/0xf0->reg0,goto_table:L2ForwardingCalc
+3. table=EgressMark, priority=210,ip,nw_dst=10.96.0.0/12 actions=set_field:0x20/0xf0->reg0,goto_table:L2ForwardingCalc
+4. table=EgressMark, priority=200,ip,in_port="client-6-3353ef" actions=set_field:ba:5e:d1:55:aa:c0->eth_src,set_field:aa:bb:cc:dd:ee:ff->eth_dst,set_field:192.168.77.113->tun_dst,set_field:0x10/0xf0->reg0,set_field:0x80000/0x80000->reg0,goto_table:L2ForwardingCalc
+5. table=EgressMark, priority=200,ct_state=+new+trk,ip,tun_dst=192.168.77.112 actions=set_field:0x1/0xff->pkt_mark,set_field:0x20/0xf0->reg0,goto_table:L2ForwardingCalc
+6. table=EgressMark, priority=200,ct_state=+new+trk,ip,in_port="web-7975-274540" actions=set_field:0x1/0xff->pkt_mark,set_field:0x20/0xf0->reg0,goto_table:L2ForwardingCalc
+7. table=EgressMark, priority=190,ct_state=+new+trk,ip,reg0=0x1/0xf actions=drop
+8. table=EgressMark, priority=0 actions=set_field:0x20/0xf0->reg0,goto_table:L2ForwardingCalc
+```
+
+Flows 1-2 match packets originating from local Pods and destined for the transport IP of remote Nodes, and then forward
+them to table [L2ForwardingCalc] to bypass Egress SNAT. `ToGatewayRegMark` is loaded, indicating that the output port
+of the packets is the local Antrea gateway.
+
+Flow 3 matches packets originating from local Pods and destined for the Services listed in the option
+`antreaProxy.skipServices`, and then forwards them to table [L2ForwardingCalc] to bypass Egress SNAT. Similar to flows
+1-2, `ToGatewayRegMark` is also loaded.
+
+The packets, matched by flows 1-3, are forwarded to this table by flow 8 in table [L3Forwarding], as they are classified
+as part of traffic destined for the external network. However, these packets are not intended to undergo Egress SNAT.
+Consequently, flows 1-3 are used to bypass Egress SNAT for these packets.
+
+Flow 4 matches packets originating from local Pods selected by the sample [Egress egress-client], whose SNAT IP is
+configured on a remote Node, which means that the matched packets should be forwarded to the remote Node through a
+tunnel. Before sending the packets to the tunnel, the source and destination MAC addresses are set to the local Antrea
+gateway MAC and the *Global Virtual MAC* respectively. Additionally, `ToTunnelRegMark`, indicating that the output port
+is a tunnel, and `EgressSNATRegMark`, indicating that packets should undergo SNAT on a remote Node, are loaded. Finally,
+the packets are forwarded to table [L2ForwardingCalc].
+
+Flow 5 matches the first packet of connections originating from remote Pods selected by the sample [Egress egress-web]
+whose SNAT IP is configured on the local Node, and then loads an 8-bit ID allocated for the associated SNAT IP defined
+in the sample Egress to the `pkt_mark`, which will be consumed by iptables on the local Node to perform SNAT with the
+SNAT IP. Subsequently, `ToGatewayRegMark`, indicating that the output port is the local Antrea gateway, is loaded.
+Finally, the packets are forwarded to table [L2ForwardingCalc].
+
+Flow 6 matches the first packet of connections originating from local Pods selected by the sample [Egress egress-web],
+whose SNAT IP is configured on the local Node. Similar to flow 5, the 8-bit ID allocated for the SNAT IP is loaded to
+`pkt_mark`, `ToGatewayRegMark` is loaded, and the packets are finally forwarded to table [L2ForwardingCalc].
+
+Flow 7 drops all other packets tunneled from remote Nodes (identified with `FromTunnelRegMark`, indicating that the
+packets are from remote Pods through a tunnel). Packets that are not matched by flows 1-6 are unexpected here and
+should be dropped.
+
+Flow 8 is the table-miss flow, which matches "tracked" and non-new packets from Egress connections and forwards
+them to table [L2ForwardingCalc]. `ToGatewayRegMark` is also loaded for these packets.
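+
+The `pkt_mark` loaded by flows 5-6 is consumed outside of OVS. To see the corresponding SNAT rules, you can list the
+nat rules on the Node; a sketch, assuming Antrea's `ANTREA-POSTROUTING` chain name. The rules should match the 8-bit
+mark and SNAT to the Egress IP.
+
+```bash
+# Show the mark-matching SNAT rules programmed for Egress on the Node.
+iptables -t nat -S ANTREA-POSTROUTING
+```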
+
+### L3DecTTL
+
+This is the table to decrement TTL for IP packets.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=L3DecTTL, priority=210,ip,reg0=0x2/0xf actions=goto_table:SNATMark
+2. table=L3DecTTL, priority=200,ip actions=dec_ttl,goto_table:SNATMark
+3. table=L3DecTTL, priority=0 actions=goto_table:SNATMark
+```
+
+Flow 1 matches packets with `FromGatewayRegMark`, which means that these packets enter the OVS pipeline from the local
+Antrea gateway. Since the host IP stack should have already decremented the TTL for such packets, the TTL should not be
+decremented again.
+
+Flow 2 decrements the TTL for packets which are not matched by flow 1.
+
+Flow 3 is the table-miss flow that should remain unused.
+
+### SNATMark
+
+This table marks connections requiring SNAT within the OVS pipeline, distinct from Egress SNAT handled by iptables.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=SNATMark, priority=200,ct_state=+new+trk,ip,reg0=0x22/0xff actions=ct(commit,table=SNAT,zone=65520,exec(set_field:0x20/0x20->ct_mark,set_field:0x40/0x40->ct_mark))
+2. table=SNATMark, priority=200,ct_state=+new+trk,ip,reg0=0x12/0xff,reg4=0x200000/0x2200000 actions=ct(commit,table=SNAT,zone=65520,exec(set_field:0x20/0x20->ct_mark))
+3. table=SNATMark, priority=190,ct_state=+new+trk,ip,nw_src=10.10.0.23,nw_dst=10.10.0.23 actions=ct(commit,table=SNAT,zone=65520,exec(set_field:0x20/0x20->ct_mark,set_field:0x40/0x40->ct_mark))
+4. table=SNATMark, priority=190,ct_state=+new+trk,ip,nw_src=10.10.0.24,nw_dst=10.10.0.24 actions=ct(commit,table=SNAT,zone=65520,exec(set_field:0x20/0x20->ct_mark,set_field:0x40/0x40->ct_mark))
+5. table=SNATMark, priority=0 actions=goto_table:SNAT
+```
+
+Flow 1 matches the first packet of hairpin Service connections, identified by `FromGatewayRegMark` and `ToGatewayRegMark`,
+indicating that both the input and output ports of the connections are the local Antrea gateway port. Such hairpin
+connections will undergo SNAT with the *Virtual Service IP* in table [SNAT]. Before forwarding the packets to table
+[SNAT], `ConnSNATCTMark`, indicating that the connection requires SNAT, and `HairpinCTMark`, indicating that this is
+a hairpin connection, are persisted to mark the connections. These two ct marks will be consumed in table [SNAT].
+
+Flow 2 matches the first packet of Service connections requiring SNAT, identified by `FromGatewayRegMark` and
+`ToTunnelRegMark`, indicating that the input port is the local Antrea gateway and the output port is a tunnel. Such
+connections will undergo SNAT with the IP address of the local Antrea gateway in table [SNAT]. Before forwarding the
+packets to table [SNAT], `ToExternalAddressRegMark` and `NotDSRServiceRegMark` are loaded, indicating that the packets
+are destined for a Service's external IP, like NodePort, LoadBalancerIP or ExternalIP, but it is not DSR mode.
+Additionally, `ConnSNATCTMark`, indicating that the connection requires SNAT, is persisted to mark the connections.
+
+It's worth noting that flows 1-2 are specific to `proxyAll`, but they are harmless when `proxyAll` is disabled since
+these flows should never be matched by in-cluster Service traffic.
+
+Flows 3-4 match the first packet of hairpin Service connections, identified by the same source and destination Pod IP
+addresses. Such hairpin connections will undergo SNAT with the IP address of the local Antrea gateway in table [SNAT].
+Similar to flow 1, `ConnSNATCTMark` and `HairpinCTMark` are persisted to mark the connections.
+
+Flow 5 is the table-miss flow.
+
+### SNAT
+
+This table performs SNAT for connections requiring SNAT within the pipeline.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=SNAT, priority=200,ct_state=+new+trk,ct_mark=0x40/0x40,ip,reg0=0x2/0xf actions=ct(commit,table=L2ForwardingCalc,zone=65521,nat(src=169.254.0.253),exec(set_field:0x10/0x10->ct_mark,set_field:0x40/0x40->ct_mark))
+2. table=SNAT, priority=200,ct_state=+new+trk,ct_mark=0x40/0x40,ip,reg0=0x3/0xf actions=ct(commit,table=L2ForwardingCalc,zone=65521,nat(src=10.10.0.1),exec(set_field:0x10/0x10->ct_mark,set_field:0x40/0x40->ct_mark))
+3. table=SNAT, priority=200,ct_state=-new-rpl+trk,ct_mark=0x20/0x20,ip actions=ct(table=L2ForwardingCalc,zone=65521,nat)
+4. table=SNAT, priority=190,ct_state=+new+trk,ct_mark=0x20/0x20,ip,reg0=0x2/0xf actions=ct(commit,table=L2ForwardingCalc,zone=65521,nat(src=10.10.0.1),exec(set_field:0x10/0x10->ct_mark))
+5. table=SNAT, priority=0 actions=goto_table:L2ForwardingCalc
+```
+
+Flow 1 matches the first packet of hairpin Service connections through the local Antrea gateway, identified by
+`HairpinCTMark` and `FromGatewayRegMark`. It performs SNAT with the *Virtual Service IP* `169.254.0.253` and forwards
+the SNAT'd packets to table [L2ForwardingCalc]. Before SNAT, the "tracked" state of packets is associated with `CtZone`.
+After SNAT, their "tracked" state is associated with `SNATCtZone`, and the `ServiceCTMark` and `HairpinCTMark` persisted
+in `CtZone` are no longer accessible. As a result, `ServiceCTMark` and `HairpinCTMark` need to be persisted once
+again, but this time in `SNATCtZone`, for subsequent tables to consume.
+
+Flow 2 matches the first packet of hairpin Service connections originating from local Pods, identified by `HairpinCTMark`
+and `FromPodRegMark`. It performs SNAT with the IP address of the local Antrea gateway and forwards the SNAT'd packets
+to table [L2ForwardingCalc]. Similar to flow 1, `ServiceCTMark` and `HairpinCTMark` are persisted in `SNATCtZone`.
+
+Flow 3 matches the subsequent request packets of connections for which SNAT was performed for the first packet, and then
+invokes `ct` action on the packets again to restore the "tracked" state in `SNATCtZone`. The packets with the appropriate
+"tracked" state are forwarded to table [L2ForwardingCalc].
+
+Flow 4 matches the first packet of Service connections requiring SNAT, identified by `ConnSNATCTMark` and
+`FromGatewayRegMark`, indicating that the connection is destined for an external Service IP, initiated through the
+Antrea gateway, and that the Endpoint is a remote Pod. It performs SNAT with the IP address of the local Antrea gateway
+and forwards the SNAT'd packets to table [L2ForwardingCalc]. Similar to flows 1 and 2, `ServiceCTMark` is persisted in
+`SNATCtZone`.
+
+Flow 5 is the table-miss flow.
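+
+Hairpin connections SNAT'd in this table can be observed in `SNATCtZone`. A sketch, assuming the `conntrack` tool on
+the Node; entries SNAT'd with the *Virtual Service IP* should show `169.254.0.253` in their reply direction.
+
+```bash
+# Look for hairpin Service connections SNAT'd with the Virtual Service IP.
+conntrack -L -w 65521 | grep 169.254.0.253
+```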
+
+### L2ForwardingCalc
+
+This is essentially the "dmac" table of the switch. We program one flow for each port (tunnel port, the local Antrea
+gateway port, and local Pod ports).
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=L2ForwardingCalc, priority=200,dl_dst=ba:5e:d1:55:aa:c0 actions=set_field:0x2->reg1,set_field:0x200000/0x600000->reg0,goto_table:TrafficControl
+2. table=L2ForwardingCalc, priority=200,dl_dst=aa:bb:cc:dd:ee:ff actions=set_field:0x1->reg1,set_field:0x200000/0x600000->reg0,goto_table:TrafficControl
+3. table=L2ForwardingCalc, priority=200,dl_dst=5e:b5:e3:a6:90:b7 actions=set_field:0x24->reg1,set_field:0x200000/0x600000->reg0,goto_table:TrafficControl
+4. table=L2ForwardingCalc, priority=200,dl_dst=fa:b7:53:74:21:a6 actions=set_field:0x25->reg1,set_field:0x200000/0x600000->reg0,goto_table:TrafficControl
+5. table=L2ForwardingCalc, priority=200,dl_dst=36:48:21:a2:9d:b4 actions=set_field:0x26->reg1,set_field:0x200000/0x600000->reg0,goto_table:TrafficControl
+6. table=L2ForwardingCalc, priority=0 actions=goto_table:TrafficControl
+```
+
+Flow 1 matches packets destined for the local Antrea gateway, identified by the destination MAC address being that of
+the local Antrea gateway. It loads `OutputToOFPortRegMark`, indicating that the packets should output to an OVS port,
+and also loads the port number of the local Antrea gateway to `TargetOFPortField`. Both values will be consumed
+in table [Output].
+
+Flow 2 matches packets destined for a tunnel, identified by the destination MAC address being that of the *Global Virtual
+MAC*. Similar to flow 1, `OutputToOFPortRegMark` is loaded, and the port number of the tunnel is loaded to
+`TargetOFPortField`.
+
+Flows 3-5 match packets destined for local Pods, identified by the destination MAC address being that of one of the local
+Pods. Similar to flow 1, `OutputToOFPortRegMark` is loaded, and the port number of the matching local Pod is loaded to
+`TargetOFPortField`.
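+
+If you want to map the `reg1` values above back to interfaces, the OVS port number of any interface can be queried
+with `ovs-vsctl`; a sketch, reusing a sample Pod interface name and a placeholder agent Pod name:
+
+```bash
+# Show the OpenFlow port number (the value loaded to TargetOFPortField)
+# for the sample web Pod's interface.
+kubectl exec -n kube-system antrea-agent-abcde -c antrea-ovs -- \
+  ovs-vsctl --columns=ofport list Interface web-7975-274540
+```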
+
+Flow 6 is the table-miss flow.
+
+### TrafficControl
+
+This table is dedicated to implementing the `TrafficControl` feature.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=TrafficControl, priority=210,reg0=0x200006/0x60000f actions=goto_table:Output
+2. table=TrafficControl, priority=200,reg1=0x25 actions=set_field:0x22->reg9,set_field:0x800000/0xc00000->reg4,goto_table:IngressSecurityClassifier
+3. table=TrafficControl, priority=200,in_port="web-7975-274540" actions=set_field:0x22->reg9,set_field:0x800000/0xc00000->reg4,goto_table:IngressSecurityClassifier
+4. table=TrafficControl, priority=200,reg1=0x26 actions=set_field:0x27->reg9,set_field:0x400000/0xc00000->reg4,goto_table:IngressSecurityClassifier
+5. table=TrafficControl, priority=200,in_port="db-755c6-5080e3" actions=set_field:0x27->reg9,set_field:0x400000/0xc00000->reg4,goto_table:IngressSecurityClassifier
+6. table=TrafficControl, priority=0 actions=goto_table:IngressSecurityClassifier
+```
+
+Flow 1 matches packets returned from TrafficControl return ports and forwards them to table [Output], where they are
+output to the port to which they are destined. Such packets are identified by `OutputToOFPortRegMark`, indicating that
+the packets should be output to an OVS port, and by `FromTCReturnRegMark`, loaded in table [Classifier], indicating
+that the packets come from a TrafficControl return port.
+
+Flows 2-3 are installed for the sample [TrafficControl redirect-web-to-local] to mark the packets associated with the
+Pods labeled by `app: web` using `TrafficControlRedirectRegMark`. Flow 2 handles the ingress direction, while flow 3
+handles the egress direction. In table [Output], these packets will be redirected to a TrafficControl target port
+specified in `TrafficControlTargetOFPortField`, whose value is loaded by these two flows.
+
+Flows 4-5 are installed for the sample [TrafficControl mirror-db-to-local] to mark the packets associated with the Pods
+labeled by `app: db` using `TrafficControlMirrorRegMark`. Similar to flows 2-3, flows 4-5 also handle both directions.
+In table [Output], these packets will be mirrored (duplicated) to a TrafficControl target port specified in
+`TrafficControlTargetOFPortField`, whose value is loaded by these two flows.
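+
+To correlate these flows with their source objects, you can list the sample TrafficControl CRs by name (the names
+come from the referenced examples):
+
+```bash
+# The TrafficControl resources realized by flows 2-3 (redirect) and 4-5 (mirror).
+kubectl get trafficcontrols.crd.antrea.io redirect-web-to-local mirror-db-to-local
+```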
+
+Flow 6 is the table-miss flow.
+
+### IngressSecurityClassifier
+
+This table classifies packets before they enter the tables for ingress security.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=IngressSecurityClassifier, priority=210,pkt_mark=0x80000000/0x80000000,ct_state=-rpl+trk,ip actions=goto_table:ConntrackCommit
+2. table=IngressSecurityClassifier, priority=201,reg4=0x80000/0x80000 actions=goto_table:AntreaPolicyIngressRule
+3. table=IngressSecurityClassifier, priority=200,reg0=0x20/0xf0 actions=goto_table:IngressMetric
+4. table=IngressSecurityClassifier, priority=200,reg0=0x10/0xf0 actions=goto_table:IngressMetric
+5. table=IngressSecurityClassifier, priority=200,reg0=0x40/0xf0 actions=goto_table:IngressMetric
+6. table=IngressSecurityClassifier, priority=200,ct_mark=0x40/0x40 actions=goto_table:ConntrackCommit
+7. table=IngressSecurityClassifier, priority=0 actions=goto_table:AntreaPolicyIngressRule
+```
+
+Flow 1 matches locally generated request packets for liveness/readiness probes from kubelet, identified by `pkt_mark`
+which is set by iptables in the host network namespace. It forwards the packets to table [ConntrackCommit] directly to
+bypass all tables for ingress security.
+
+Flow 2 matches packets destined for NodePort Services and forwards them to table [AntreaPolicyIngressRule] to enforce
+Antrea-native NetworkPolicies applied to NodePort Services. Without this flow, if the selected Endpoint is not a local
+Pod, the packets might be matched by one of the flows 3-5, skipping table [AntreaPolicyIngressRule].
+
+Flows 3-5 match packets destined for the local Antrea gateway, tunnel, or uplink port with `ToGatewayRegMark`,
+`ToTunnelRegMark`, or `ToUplinkRegMark`, respectively, and forward them to table [IngressMetric] directly to bypass
+all tables for ingress security.
+
+Flow 6 matches packets from hairpin connections with `HairpinCTMark` and forwards them to table [ConntrackCommit]
+directly to bypass all tables for ingress security. Refer to this PR
+[#5687](https://github.com/antrea-io/antrea/pull/5687) for more information.
+
+Flow 7 is the table-miss flow.
+
+### AntreaPolicyIngressRule
+
+This table is very similar to table [AntreaPolicyEgressRule] but implements the ingress rules of Antrea-native
+NetworkPolicies. Depending on the tier to which the policy belongs, the rules will be installed in a table corresponding
+to that tier. The ingress table-to-tier mapping is as follows:
+
+```text
+Antrea-native NetworkPolicy other Tiers -> AntreaPolicyIngressRule
+K8s NetworkPolicy -> IngressRule
+Antrea-native NetworkPolicy Baseline Tier -> IngressDefaultRule
+```
+
+Again for this table, you will need to keep in mind the Antrea-native NetworkPolicy
+[specification](#antrea-native-networkpolicy-implementation) and the Antrea-native L7 NetworkPolicy
+[specification](#antrea-native-l7-networkpolicy-implementation) that we are using. Since these sample
+ingress policies reside in the Application Tier, if you dump the flows of this table, you may see the following:
+
+```text
+1. table=AntreaPolicyIngressRule, priority=64990,ct_state=-new+est,ip actions=goto_table:IngressMetric
+2. table=AntreaPolicyIngressRule, priority=64990,ct_state=-new+rel,ip actions=goto_table:IngressMetric
+3. table=AntreaPolicyIngressRule, priority=14500,reg1=0x7 actions=conjunction(14,2/3)
+4. table=AntreaPolicyIngressRule, priority=14500,ip,nw_src=10.10.0.26 actions=conjunction(14,1/3)
+5. table=AntreaPolicyIngressRule, priority=14500,tcp,tp_dst=8080 actions=conjunction(14,3/3)
+6. table=AntreaPolicyIngressRule, priority=14500,conj_id=14,ip actions=set_field:0xd->reg6,ct(commit,table=IngressMetric,zone=65520,exec(set_field:0xd/0xffffffff->ct_label,set_field:0x80/0x80->ct_mark,set_field:0x20000000000000000/0xfff0000000000000000->ct_label))
+7. table=AntreaPolicyIngressRule, priority=14600,ip,nw_src=10.10.0.26 actions=conjunction(6,1/3)
+8. table=AntreaPolicyIngressRule, priority=14600,reg1=0x25 actions=conjunction(6,2/3)
+9. table=AntreaPolicyIngressRule, priority=14600,tcp,tp_dst=80 actions=conjunction(6,3/3)
+10. table=AntreaPolicyIngressRule, priority=14600,conj_id=6,ip actions=set_field:0x6->reg6,ct(commit,table=IngressMetric,zone=65520,exec(set_field:0x6/0xffffffff->ct_label))
+11. table=AntreaPolicyIngressRule, priority=14599,ip actions=conjunction(4,1/2)
+12. table=AntreaPolicyIngressRule, priority=14599,reg1=0x25 actions=conjunction(4,2/2)
+13. table=AntreaPolicyIngressRule, priority=14599,conj_id=4 actions=set_field:0x4->reg3,set_field:0x400/0x400->reg0,goto_table:IngressMetric
+14. table=AntreaPolicyIngressRule, priority=0 actions=goto_table:IngressRule
+```
+
+Flows 1-2, which are installed by default with the highest priority, match non-new and "tracked" packets and
+forward them to table [IngressMetric] to bypass the checks from ingress rules. This means that if a connection is
+established, its packets go straight to table [IngressMetric], with no other match required. In particular, this ensures
+that reply traffic is never dropped because of an Antrea-native NetworkPolicy or K8s NetworkPolicy rule. However, this
+also means that ongoing connections are not affected if the Antrea-native NetworkPolicy or the K8s NetworkPolicy is
+updated.
+
+Similar to table [AntreaPolicyEgressRule], the priorities of flows 3-13 installed for the ingress rules are decided by
+the following:
+
+- The `spec.tier` value in an Antrea-native NetworkPolicy determines the primary level for flow priority.
+- The `spec.priority` value in an Antrea-native NetworkPolicy determines the secondary level for flow priority within
+ the same `spec.tier`. A lower value in this field corresponds to a higher priority for the flow.
+- The rule's position within an Antrea-native NetworkPolicy also influences flow priority. Rules positioned closer to
+ the beginning have higher priority for the flow.
+
+Flows 3-6, whose priorities are all 14500, are installed for the ingress rule `AllowFromClientL7` in the sample policy.
+These flows are described as follows:
+
+- Flow 3 is used to match packets with the output OVS port in set {0x7}, which has all the ports of the Pods selected
+  by the label `app: web`, constituting the second dimension for `conjunction` with `conj_id` 14.
+- Flow 4 is used to match packets with the source IP address in set {10.10.0.26}, which has all IP addresses of the
+  Pods selected by the label `app: client`, constituting the first dimension for `conjunction` with `conj_id` 14.
+- Flow 5 is used to match packets with the destination TCP port in set {8080} specified in the rule, constituting the
+  third dimension for `conjunction` with `conj_id` 14.
+- Flow 6 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 14 and forward them
+ to table [IngressMetric], persisting `conj_id` to `IngressRuleCTLabel` consumed in table [IngressMetric].
+ Additionally, for the L7 protocol:
+ - `L7NPRedirectCTMark` is persisted, indicating the packets should be redirected to an application-aware engine to
+ be filtered according to L7 rules, such as method `GET` and path `/api/v2/*` in the sample policy.
+ - A VLAN ID allocated for the Antrea-native L7 NetworkPolicy is persisted in `L7NPRuleVlanIDCTLabel`, which will be
+ consumed in table [Output].
+
+Flows 7-10, whose priorities are all 14600, are installed for the ingress rule `AllowFromClient` in the sample policy.
+These flows are described as follows:
+
+- Flow 7 is used to match packets with the source IP address in set {10.10.0.26}, which has all IP addresses of the Pods
+  selected by the label `app: client`, constituting the first dimension for `conjunction` with `conj_id` 6.
+- Flow 8 is used to match packets with the output OVS port in set {0x25}, which has all the ports of the Pods selected
+ by the label `app: web`, constituting the second dimension for `conjunction` with `conj_id` 6.
+- Flow 9 is used to match packets with the destination TCP port in set {80} specified in the rule, constituting the
+ third dimension for `conjunction` with `conj_id` 6.
+- Flow 10 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 6 and forward
+ them to table [IngressMetric], persisting `conj_id` to `IngressRuleCTLabel` consumed in table [IngressMetric].
+
+Flows 11-13, whose priorities are all 14599, are installed for the ingress rule with a `Drop` action defined after the
+rule `AllowFromClient` in the sample policy, serving as a default deny rule. Unlike K8s NetworkPolicies, Antrea-native
+NetworkPolicies have no implicit default rule, and all rules are explicitly defined. Hence, the rules are evaluated
+as-is, and there is no need for a table [AntreaPolicyIngressDefaultRule]. These flows are described as follows:
+
+- Flow 11 is used to match any IP packets, constituting the first dimension for `conjunction` with `conj_id` 4. This
+  flow, which matches all IP packets, exists because we need at least 2 dimensions for a conjunctive match.
+- Flow 12 is used to match packets with the output OVS port in set {0x25}, which has all the ports of the Pods
+  selected by the label `app: web`, constituting the second dimension for `conjunction` with `conj_id` 4.
+- Flow 13 is used to match packets meeting both dimensions of `conjunction` with `conj_id` 4, loading `APDenyRegMark`,
+  which will be consumed in table [IngressMetric], to which the packets are forwarded.
+
+Flow 14 is the table-miss flow to forward packets not matched by other flows to table [IngressMetric].
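+
+Because a conjunctive match is spread across several flows, it can help to pull out all the pieces of a single
+`conjunction` at once; a sketch using a plain grep over the dump, with a placeholder agent Pod name:
+
+```bash
+# Collect every clause flow and the final conj_id flow for conjunction 14.
+kubectl exec -n kube-system antrea-agent-abcde -c antrea-ovs -- \
+  ovs-ofctl -O OpenFlow15 dump-flows br-int table=AntreaPolicyIngressRule --no-stats \
+  | grep -E 'conjunction\(14,|conj_id=14'
+```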
+
+### IngressRule
+
+This table is very similar to table [EgressRule] but implements ingress rules for K8s NetworkPolicies. Once again, you
+will need to keep in mind the K8s NetworkPolicy [specification](#kubernetes-networkpolicy-implementation) that we are
+using.
+
+If you dump the flows of this table, you should see something like this:
+
+```text
+1. table=IngressRule, priority=200,ip,nw_src=10.10.0.26 actions=conjunction(3,1/3)
+2. table=IngressRule, priority=200,reg1=0x25 actions=conjunction(3,2/3)
+3. table=IngressRule, priority=200,tcp,tp_dst=80 actions=conjunction(3,3/3)
+4. table=IngressRule, priority=190,conj_id=3,ip actions=set_field:0x3->reg6,ct(commit,table=IngressMetric,zone=65520,exec(set_field:0x3/0xffffffff->ct_label))
+5. table=IngressRule, priority=0 actions=goto_table:IngressDefaultRule
+```
+
+Flows 1-4 are installed for the ingress rule in the sample K8s NetworkPolicy. These flows are described as follows:
+
+- Flow 1 is used to match packets with the source IP address in set {10.10.0.26}, which is from the Pods selected
+ by the label `app: client` in the `default` Namespace, constituting the first dimension for `conjunction` with `conj_id` 3.
+- Flow 2 is used to match packets with the output OVS port in set {0x25}, which has all ports of the Pods selected
+ by the label `app: web` in the `default` Namespace, constituting the second dimension for `conjunction` with `conj_id` 3.
+- Flow 3 is used to match packets with the destination TCP port in set {80} specified in the rule, constituting
+ the third dimension for `conjunction` with `conj_id` 3.
+- Flow 4 is used to match packets meeting all the three dimensions of `conjunction` with `conj_id` 3 and forward
+ them to table [IngressMetric], persisting `conj_id` to `IngressRuleCTLabel`.
+
+Flow 5 is the table-miss flow to forward packets not matched by other flows to table [IngressDefaultRule].
+
+### IngressDefaultRule
+
+This table is similar in its purpose to table [EgressDefaultRule], and it complements table [IngressRule] for K8s
+NetworkPolicy ingress rule implementation. In Kubernetes, when a NetworkPolicy is applied to a set of Pods, the default
+behavior for ingress connections for these Pods becomes "deny" (they become [isolated
+Pods](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)). This
+table is in charge of dropping traffic destined for Pods to which a NetworkPolicy (with an ingress rule) is applied,
+and which did not match any of the "allow" list rules.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=IngressDefaultRule, priority=200,reg1=0x25 actions=drop
+2. table=IngressDefaultRule, priority=0 actions=goto_table:IngressMetric
+```
+
+Flow 1, based on our sample K8s NetworkPolicy, is to drop traffic destined for OVS port 0x25, the port number associated
+with a Pod selected by the label `app: web`.
+
+Flow 2 is the table-miss flow to forward packets to table [IngressMetric].
+
+This table is also used to implement Antrea-native NetworkPolicy ingress rules created in the Baseline Tier.
+Since the Baseline Tier is meant to be enforced after K8s NetworkPolicies, the corresponding flows will be created at a
+lower priority than K8s NetworkPolicy default drop flows. These flows are similar to flows 3-9 in table
+[AntreaPolicyIngressRule].
+
+### IngressMetric
+
+This table is very similar to table [EgressMetric], but is used to collect ingress metrics for NetworkPolicies (both
+K8s and Antrea-native).
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=IngressMetric, priority=200,ct_state=+new,ct_label=0x3/0xffffffff,ip actions=goto_table:ConntrackCommit
+2. table=IngressMetric, priority=200,ct_state=-new,ct_label=0x3/0xffffffff,ip actions=goto_table:ConntrackCommit
+3. table=IngressMetric, priority=200,ct_state=+new,ct_label=0x6/0xffffffff,ip actions=goto_table:ConntrackCommit
+4. table=IngressMetric, priority=200,ct_state=-new,ct_label=0x6/0xffffffff,ip actions=goto_table:ConntrackCommit
+5. table=IngressMetric, priority=200,reg0=0x400/0x400,reg3=0x4 actions=drop
+6. table=IngressMetric, priority=0 actions=goto_table:ConntrackCommit
+```
+
+Flows 1-2, matching packets with `IngressRuleCTLabel` set to 3 (the `conj_id` allocated for the sample K8s NetworkPolicy
+ingress rule and loaded in table [IngressRule] flow 4), are used to collect metrics for the ingress rule.
+
+Flows 3-4, matching packets with `IngressRuleCTLabel` set to 6 (the `conj_id` allocated for the sample Antrea-native
+NetworkPolicy ingress rule and loaded in table [AntreaPolicyIngressRule] flow 10), are used to collect metrics for the
+ingress rule.
+
+Flow 5 is the drop rule for the sample Antrea-native NetworkPolicy ingress rule. It drops packets by matching
+`APDenyRegMark` and `APConjIDField` set to 4, the `conj_id` allocated for the ingress rule, both loaded in table
+[AntreaPolicyIngressRule] flow 13.
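+
+The per-rule counters collected by these flows back Antrea's NetworkPolicy stats APIs; a sketch, assuming the
+`NetworkPolicyStats` feature gate is enabled:
+
+```bash
+# Metrics for K8s NetworkPolicies; Antrea-native policies are exposed via
+# antreanetworkpolicystats and antreaclusternetworkpolicystats.
+kubectl get networkpolicystats -A
+```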
+
+Flow 6 is the table-miss flow.
+
+### ConntrackCommit
+
+This table is in charge of committing non-Service connections in `CtZone`.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=ConntrackCommit, priority=200,ct_state=+new+trk-snat,ct_mark=0/0x10,ip actions=ct(commit,table=Output,zone=65520,exec(move:NXM_NX_REG0[0..3]->NXM_NX_CT_MARK[0..3]))
+2. table=ConntrackCommit, priority=0 actions=goto_table:Output
+```
+
+Flow 1 is designed to match the first packet of non-Service connections with the "tracked" state and `NotServiceCTMark`.
+Then it commits the relevant connections in `CtZone`, persisting the value of `PktSourceField` to
+`ConnSourceCTMarkField`, and forwards the packets to table [Output].
+
+Flow 2 is the table-miss flow.
+
+### Output
+
+This is the final table in the pipeline, responsible for handling the output of packets from OVS. It addresses the
+following cases:
+
+1. Output packets to an application-aware engine for further L7 protocol processing.
+2. Output packets to a target port and a mirroring port defined in a TrafficControl CR with `Mirror` action.
+3. Output packets to a port defined in a TrafficControl CR with `Redirect` action.
+4. Output packets from hairpin connections to the ingress port where they were received.
+5. Output packets to a target port.
+6. Output packets to the OpenFlow controller (Antrea Agent).
+7. Drop packets.
+
+If you dump the flows of this table, you may see the following:
+
+```text
+1. table=Output, priority=212,ct_mark=0x80/0x80,reg0=0x200000/0x600000 actions=push_vlan:0x8100,move:NXM_NX_CT_LABEL[64..75]->OXM_OF_VLAN_VID[],output:"antrea-l7-tap0"
+2. table=Output, priority=211,reg0=0x200000/0x600000,reg4=0x400000/0xc00000 actions=output:NXM_NX_REG1[],output:NXM_NX_REG9[]
+3. table=Output, priority=211,reg0=0x200000/0x600000,reg4=0x800000/0xc00000 actions=output:NXM_NX_REG9[]
+4. table=Output, priority=210,ct_mark=0x40/0x40 actions=IN_PORT
+5. table=Output, priority=200,reg0=0x200000/0x600000 actions=output:NXM_NX_REG1[]
+6. table=Output, priority=200,reg0=0x2400000/0xfe600000 actions=meter:256,controller(reason=no_match,id=62373,userdata=01.01)
+7. table=Output, priority=200,reg0=0x4400000/0xfe600000 actions=meter:256,controller(reason=no_match,id=62373,userdata=01.02)
+8. table=Output, priority=0 actions=drop
+```
+
+Flow 1 is for case 1. It matches packets with `L7NPRedirectCTMark` and `OutputToOFPortRegMark`, and then outputs them to
+the port `antrea-l7-tap0`, specifically created for connecting to an application-aware engine. Notably, these packets
+are pushed with an 802.1Q header, carrying the VLAN ID value persisted in `L7NPRuleVlanIDCTLabel`, before being output,
+as required by the Antrea-native L7 NetworkPolicy implementation. The application-aware engine enforcing L7 policies (e.g., Suricata)
+can leverage the VLAN ID to determine which set of rules to apply to the packet.
+
+Flow 2 is for case 2. It matches packets with `TrafficControlMirrorRegMark` and `OutputToOFPortRegMark`, and then
+outputs them to the port specified in `TargetOFPortField` and the port specified in `TrafficControlTargetOFPortField`.
+Unlike the `Redirect` action, the `Mirror` action creates an additional copy of the packet.
+
+Flow 3 is for case 3. It matches packets with `TrafficControlRedirectRegMark` and `OutputToOFPortRegMark`, and then
+outputs them to the port specified in `TrafficControlTargetOFPortField`.
+
+Flow 4 is for case 4. It matches packets from hairpin connections by matching `HairpinCTMark` and outputs them back to the
+port where they were received.
+
+Flow 5 is for case 5. It matches packets by matching `OutputToOFPortRegMark` and outputs them to the OVS port specified by
+the value stored in `TargetOFPortField`.
+
+Flows 6-7 are for case 6. They match packets by matching `OutputToControllerRegMark` and the value stored in
+`PacketInOperationField`, then output them to the OpenFlow controller (Antrea Agent) with corresponding user data.
+
+In practice, you will see additional flows similar to these ones to accommodate different scenarios (different
+`PacketInOperationField` values). Note that packets sent to the controller are metered to avoid overwhelming the
+antrea-agent and using too many resources.
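+
+The meters and their statistics can be inspected directly; a sketch with a placeholder agent Pod name (meters
+require OpenFlow 1.3 or later, hence the explicit protocol version):
+
+```bash
+# Show the meters that rate-limit packet-in messages to the Antrea Agent.
+kubectl exec -n kube-system antrea-agent-abcde -c antrea-ovs -- \
+  ovs-ofctl -O OpenFlow15 meter-stats br-int
+```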
+
+Flow 8 is the table-miss flow for case 7. It drops packets that do not match any of the flows in this table.
+
+[ARPSpoofGuard]: #arpspoofguard
+[AntreaPolicyEgressRule]: #antreapolicyegressrule
+[AntreaPolicyIngressRule]: #antreapolicyingressrule
+[Classifier]: #classifier
+[ClusterIP without Endpoint]: #clusterip-without-endpoint
+[ClusterIP]: #clusterip
+[ConntrackCommit]: #conntrackcommit
+[ConntrackState]: #conntrackstate
+[ConntrackZone]: #conntrackzone
+[Ct Labels]: #ovs-ct-label
+[Ct Marks]: #ovs-ct-mark
+[Ct Zones]: #ovs-ct-zone
+[EgressDefaultRule]: #egressdefaultrule
+[EgressMark]: #egressmark
+[EgressMetric]: #egressmetric
+[EgressRule]: #egressrule
+[Egress egress-client]: #egress-applied-to-client-pods
+[Egress egress-web]: #egress-applied-to-web-pods
+[EndpointDNAT]: #endpointdnat
+[IngressDefaultRule]: #ingressdefaultrule
+[IngressMetric]: #ingressmetric
+[IngressRule]: #ingressrule
+[L2ForwardingCalc]: #l2forwardingcalc
+[L3DecTTL]: #l3decttl
+[L3Forwarding]: #l3forwarding
+[LoadBalancer]: #loadbalancer
+[NodePort]: #nodeport
+[NodePortMark]: #nodeportmark
+[OVS Registers]: #ovs-registers
+[Output]: #output
+[PreRoutingClassifier]: #preroutingclassifier
+[SNATMark]: #snatmark
+[SNAT]: #snat
+[Service with ExternalIP]: #service-with-externalip
+[Service with ExternalTrafficPolicy Local]: #service-with-externaltrafficpolicy-local
+[Service with session affinity]: #service-with-session-affinity
+[ServiceLB]: #servicelb
+[SessionAffinity]: #sessionaffinity
+[SpoofGuard]: #spoofguard
+[TrafficControl]: #trafficcontrol
+[TrafficControl mirror-db-to-local]: #trafficcontrol-for-packet-mirroring
+[TrafficControl redirect-web-to-local]: #trafficcontrol-for-packet-redirecting
+[UnSNAT]: #unsnat
diff --git a/content/docs/v2.2.0-alpha.2/docs/design/policy-only.md b/content/docs/v2.2.0-alpha.2/docs/design/policy-only.md
new file mode 100644
index 00000000..228b52ea
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/design/policy-only.md
@@ -0,0 +1,54 @@
+# Running Antrea in `networkPolicyOnly` Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as a NetworkPolicy plug-in together with routed CNIs.
+As long as a CNI implementation fits into this model, Antrea may be inserted to enforce
+NetworkPolicy in that CNI's environment using Open vSwitch (OVS).
+
+In addition, running Antrea as a NetworkPolicy plug-in automatically enables Antrea Proxy, because
+Antrea Proxy is required to load-balance Pod-to-Service traffic.
+
+{{< img src="../assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI" >}}
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology, a Pod connects to the host network via a
+point-to-point (PtP) device, such as (but not limited to) a veth pair. On the host network, a
+host route with the Pod's IP address as destination is created for each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a hub-and-spoke model, where traffic to and from a Pod,
+even within the same worker Node, must first traverse the host network and be
+routed by it.
+
+When the container runtime instantiates a Pod, it first calls the primary CNI to configure the Pod's
+IP address, route table, DNS, etc., and then connects the Pod to the host network with a PtP device such as a
+veth pair. When Antrea is chained with this primary CNI, the container runtime then calls the
+Antrea Agent, which attaches the Pod's PtP device to the OVS bridge and moves the host
+route to the Pod from the PtP device to the local host gateway (`antrea-gw0`) interface. This is
+illustrated by the diagram on the right.
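+
+One way to observe this on a Node is to look up the host route for a Pod IP after the Antrea Agent has processed
+the Pod; a sketch with a hypothetical Pod IP:
+
+```bash
+# After chaining, the Pod's host route should resolve via antrea-gw0 instead
+# of the Pod's PtP device; 10.1.2.3 is a hypothetical local Pod IP.
+ip route get 10.1.2.3
+```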
+
+Antrea needs to ensure that:
+
+1. All IP packets, sent on `antrea-gw0` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. All IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if the OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses as all MAC addresses stay within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor
+of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge:
+
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is to allow a Pod to
+resolve its neighbors, so that it can generate traffic to these neighbors.
+1. An L3 flow for each local Pod that routes IP packets to that Pod if the packets' destination IP
+   matches that of the Pod.
+1. An L3 flow that routes all other IP packets to the host network via the `antrea-gw0` interface.
+
+These flows together handle all Pod traffic patterns.
diff --git a/content/docs/v2.2.0-alpha.2/docs/design/windows-design.md b/content/docs/v2.2.0-alpha.2/docs/design/windows-design.md
new file mode 100644
index 00000000..5191e72a
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/design/windows-design.md
@@ -0,0 +1,271 @@
+# Running Antrea on Windows
+
+Antrea supports running on Windows worker Nodes. On Windows Nodes, Antrea sets up an overlay
+network to forward packets between Nodes and implements NetworkPolicies.
+
+## Design
+
+On Windows, the Host Networking Service (HNS) is a necessary component to support container
+networking. For Antrea on Windows, "Transparent" mode is chosen for the HNS network. In this
+mode, containers will be directly connected to the physical network through an **external**
+Hyper-V switch.
+
+OVS works as a forwarding extension of the external Hyper-V switch created by
+HNS. Hence, the packets that are sent from/to the containers can be processed by OVS.
+The network adapter used in the HNS Network is also added to the OVS bridge as the uplink
+interface. An internal interface for the OVS bridge is created, and the original networking
+configuration (e.g., IP, MAC and routing entries) on the host network adapter is moved to
+this new interface. Some extra OpenFlow entries are needed to ensure the host traffic can be
+forwarded correctly.
+
+{{< img src="../assets/hns_integration.svg" width="600" alt="HNS Integration" >}}
+
+Windows NetNat is configured to make sure the Pods can access external addresses. The packet
+from a Pod to an external address is firstly output to antrea-gw0, and then SNAT is performed on the
+Windows host. The SNATed packet enters OVS from the OVS bridge interface and leaves the Windows host
+from the uplink interface directly.
+
+Antrea implements the Kubernetes ClusterIP Service leveraging OVS. Pod-to-ClusterIP-Service traffic
+is load-balanced and forwarded directly inside the OVS pipeline. kube-proxy runs
+on each Windows Node to implement the Kubernetes NodePort Service. Kube-proxy captures NodePort Service
+traffic and sets up a connection to a backend Pod to forward the request using this connection.
+The forwarded request enters the OVS pipeline through "antrea-gw0" and is then forwarded to the
+Pod. To be compatible with OVS, kube-proxy on Windows must be configured to run in **userspace**
+mode, and a specific network adapter is required, on which Service IP addresses will be configured
+by kube-proxy.
+
+### HNS Network configuration
+
+HNS Network is created during the Antrea Agent initialization phase, and it should be created before
+the OVS bridge is created. This is because OVS is working as the Hyper-V Switch Extension, and the
+ovs-vswitchd process cannot work correctly until the OVS Extension is enabled on the newly created
+Hyper-V Switch.
+
+When creating the HNS Network, the local subnet CIDR and the uplink network adapter are required.
+Antrea Agent finds the network adapter from the Windows host using the Node's internal IP as a filter,
+and retrieves the local Subnet CIDR from the Node spec.
+
+After the HNS Network is created, the OVS extension should be enabled immediately on the Hyper-V Switch.
+
+### Container network configuration
+
+[**host-local**](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local)
+plugin is used to provide IPAM for containers, and the address is allocated from the subnet CIDR
+configured on the HNS Network.
+
+Windows HNS Endpoint is leveraged as the vNIC for each container. A single HNS Endpoint with the
+IP allocated by the IPAM plugin is created for each Pod. The HNS Endpoint should be attached to all
+containers in the same Pod to ensure that the network configuration can be correctly accessed (this
+operation is to make sure the DNS configuration is readable from all containers).
+
+One OVS internal port with the same name as the HNS Endpoint is also needed, in order to handle
+container traffic with OpenFlow rules. OpenFlow entries are installed to implement Pod-to-Pod,
+Pod-to-external and Pod-to-ClusterIP-Service connectivity.
+
+CNIAdd request might be called multiple times for a given Pod. This is because kubelet on Windows
+assumes CNIAdd is an idempotent event, and it uses this event to query the Pod networking status.
+Antrea needs to identify the container type (sandbox or workload) from the CNIAdd request:
+
+* we create the HNS Endpoint only when the request is for the sandbox container
+* we attach the HNS Endpoint no matter whether it is a sandbox container or a workload container.
+
+### Gateway port configuration
+
+The gateway port is created during the Antrea Agent initialization phase, and the address of the interface
+should be the first IP in the subnet. The port is an OVS internal port and its default name is "antrea-gw0".
+
+The gateway port is used to help implement L3 connectivity for the containers, including Pod-to-external,
+and Node-to-Pod. For the Pod-to-external case, OpenFlow entries are
+installed in order to output these packets to the host on the gateway port. To ensure the packet is forwarded
+correctly on the host, the IP-Forwarding feature should be enabled on the network adapter of the
+gateway port.
+
+A routing entry for traffic from the Node to the local Pod subnet is needed on the Windows host to ensure
+that the packet can enter the OVS pipeline on the gateway port. This routing entry is added when "antrea-gw0"
+is created.
+
+Every time a new Node joins the cluster, a host routing entry on the gateway port is required, and the
+remote subnet CIDR should be routed with the remote gateway address as the nexthop.
+
+### Tunnel port configuration
+
+Tunnel port configuration should be similar to Antrea on Linux:
+
+* tunnel port is added after OVS bridge is created;
+* a flow-based tunnel with the appropriate remote address is created for each Node in the cluster with OpenFlow.
+
+The only difference with Antrea on Linux is that the tunnel local address is required when creating the tunnel
+port (provided with `local_ip` option). This local address is the one configured on the OVS bridge.
+
+### OVS bridge interface configuration
+
+Since OVS also takes charge of the host's network, an interface for the OVS bridge
+is required, on which the host network settings are configured. The virtual network adapter created
+along with the HNS Network is used as the OVS bridge interface. The virtual network adapter is renamed to
+the expected OVS bridge name, then the OVS bridge port is created. Hence, OVS can find the virtual network
+adapter by name and attach it directly. The Windows host has already configured the virtual network adapter with
+the IP, MAC and route entries that were originally on the uplink interface when creating the HNS Network; as a
+result, no extra manual IP/MAC/route configuration on the OVS bridge is needed.
+
+The packets that are sent to/from the Windows host should be forwarded on this interface, so the OVS bridge
+is also a valid entry point into the OVS pipeline. A special ofport number, 65534 (named LOCAL), is used for the
+OVS bridge in the OpenFlow spec.
+
+In the OVS `Classifier` table, new OpenFlow entries are needed to match the packets from this interface. The
+packet entering OVS from this interface is output to the uplink interface directly.
+
+### OVS uplink interface configuration
+
+After the OVS bridge is created, the original physical adapter is added to the OVS bridge as the uplink interface.
+The uplink interface is used to support traffic from Pods accessing the world outside the current host.
+
+Traffic entering OVS from the uplink interface is differentiated in the OVS `Classifier`
+table. In encap mode, packets entering OVS from the uplink interface are output to the bridge interface directly.
+In noEncap mode, there are three kinds of packets entering OVS from the uplink interface:
+
+ 1) Traffic that is sent to local Pods from a Pod on a different Node
+ 2) Traffic that is sent to local Pods from a different Node according to the routing configuration
+ 3) Traffic on the host network
+
+For 1 and 2, the packet enters the OVS pipeline, and the `macRewriteMark` is set to ensure the destination MAC can be
+modified.
+For 3, the packet is output to the OVS bridge interface directly.
+
+The packet is always output to the uplink interface if it enters OVS from the bridge interface. We
+also output Pod traffic to the uplink interface in noEncap mode, if the destination is a Pod on a different Node,
+or if it is a reply packet to a request that was sent from a different Node. This avoids the cost of the
+packet entering OVS twice (OVS -> Windows host -> OVS).
+
+The following are the OpenFlow entries for the uplink interface in encap mode.
+
+```text
+Classifier Table: 0
+table=0, priority=200, in_port=$uplink actions=LOCAL
+table=0, priority=200, in_port=LOCAL actions=output:$uplink
+```
+
+The following is an example of the OpenFlow entries related to the uplink interface in noEncap mode.
+
+```text
+Classifier Table: 0
+table=0, priority=210, ip, in_port=$uplink, nw_dst=$localPodSubnet, actions=load:0x4->NXM_NX_REG0[0..15],load:0x1->NXM_NX_REG0[19],resubmit(,29)
+table=0, priority=200, in_port=$uplink actions=LOCAL
+table=0, priority=200, in_port=LOCAL actions=output:$uplink
+
+L3Forwarding Table: 70
+// Rewrite the destination MAC with the Node's MAC on which target Pod is located.
+table=70, priority=200,ip,nw_dst=$peerPodSubnet actions=mod_dl_dst:$peerNodeMAC,resubmit(,80)
+// Rewrite the destination MAC with the Node's MAC if it is a reply for the access from the Node.
+table=70, priority=200,ct_state=+rpl+trk,ip,nw_dst=$peerNodeIP actions=mod_dl_dst:$peerNodeMAC,resubmit(,80)
+
+L2ForwardingCalcTable: 80
+table=80, priority=200,dl_dst=$peerNodeMAC actions=load:$uplink->NXM_NX_REG1[],set_field:0x10000/0x10000->reg0,resubmit(,105)
+```
+
+### SNAT configuration
+
+SNAT is an important feature of the Antrea Agent on Windows Nodes, required to support Pods accessing external
+addresses. It is implemented using the NAT capability of the Windows host.
+
+To support this feature, we configure NetNat on the Windows host for the Pod subnet:
+
+```text
+New-NetNat -Name antrea-nat -InternalIPInterfaceAddressPrefix $localPodSubnet
+```
+
+A packet sent from a local Pod to an external address leaves OVS via `antrea-gw0`, enters the Windows host,
+and the SNAT action is performed. The SNAT'd address is chosen by the Windows host according to the routing
+configuration. As for the reply packet of the Pod-to-external traffic, it enters the Windows host and is de-SNAT'd
+first; the packet then enters OVS from `antrea-gw0` and is finally forwarded to the Pod.
+
+### Using Windows named pipe for internal connections
+
+Named pipe is used for local connections on Windows Nodes instead of Unix Domain Socket (UDS). It is used in
+these scenarios:
+
+* OVSDB connection
+* OpenFlow connection
+* The connection between CNI plugin and CNI server
+
+## Antrea and OVS Management on Windows
+
+While we provide different installation methods for Windows, the recommended one
+is to use the `antrea-windows-with-ovs.yml` manifest. With this method, the
+antrea-agent process and the OVS daemons (ovsdb-server and ovs-vswitchd) run as
+a Pod on Windows worker Nodes, and are managed by a DaemonSet. This installation
+method relies on [Windows HostProcess Pod](https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/)
+support.
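+
+As a sketch, deploying this manifest only requires applying it to the cluster (the release tag below is a
+placeholder; use the manifest matching your Antrea version):
+
+```bash
+# Deploy the antrea-agent and the OVS daemons as a DaemonSet on Windows Nodes.
+kubectl apply -f "https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-windows-with-ovs.yml"
+```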
+
+## Traffic walkthrough
+
+### Pod-to-Pod Traffic
+
+The intra-Node Pod-to-Pod traffic and inter-Node Pod-to-Pod traffic are the same as Antrea on Linux.
+It is processed and forwarded by OVS, and controlled with OpenFlow entries.
+
+### Service Traffic
+
+Kube-proxy userspace mode is configured to provide NodePort Service function. A specific Network adapter named
+"HNS Internal NIC" is provided to kube-proxy to configure Service addresses. The OpenFlow entries for the
+NodePort Service traffic on Windows are the same as those on Linux.
+
+Antrea Proxy implements the ClusterIP Service function. Antrea Agent installs routes to send ClusterIP Service
+traffic from host network to the OVS bridge. For each Service, it adds a route that routes the traffic via a
+virtual IP (169.254.0.253), and it also adds a route to indicate that the virtual IP is reachable via
+antrea-gw0. The reason to add a virtual IP, rather than routing the traffic directly to antrea-gw0, is that
+then only one neighbor cache entry needs to be added, which resolves the virtual IP to a virtual MAC.
+
+When a Service's Endpoints are in the hostNetwork or an external network, a request packet will have its
+destination IP DNAT'd to the selected Endpoint IP and its source IP SNAT'd to the
+virtual IP (169.254.0.253). This SNAT is needed so that the reply packets, destined for the virtual IP
+rather than the Node IP, are sent from the host network back into the OVS pipeline.
+Check the packet forwarding path described below.
+
+A request packet from the host enters the OVS pipeline via antrea-gw0 and exits via antrea-gw0
+back to the host network. On the Windows host, with the help of NetNat, the request packet's source IP will
+be SNAT'd again to the Node IP.
+
+The reply packets take the reverse path in both situations, regardless of whether the Endpoint is in the
+ClusterCIDR or not.
+
+The following path is an example of host accessing a Service whose endpoint is a hostNetwork Pod on
+another Node. The request packet is like:
+
+```text
+host -> antrea-gw0 -> OVS pipeline -> antrea-gw0 -> host NetNat -> br-int -> OVS pipeline -> peer Node
+ | |
+ DNAT(peer Node IP) SNAT(Node IP)
+ SNAT(virtual IP)
+```
+
+The forwarding path of a reply packet is like:
+
+```text
+peer Node -> OVS pipeline -> br-int -> host NetNat -> antrea-gw0 -> OVS pipeline -> antrea-gw0 -> host
+ | |
+ d-SNAT(virtual IP) d-SNAT(antrea-gw0 IP)
+ d-DNAT(Service IP)
+```
+
+### External Traffic
+
+The Pod-to-external traffic leaves the OVS pipeline from the gateway interface and is then SNAT'd on the Windows
+host. If the packet should leave the Windows host from the OVS uplink interface according to the routing
+configuration on the Windows host, it is first forwarded to the OVS bridge, on which the host IP is configured,
+and then output to the uplink interface by the OVS pipeline.
+
+The corresponding reply traffic enters OVS from the uplink interface first, and then enters the host from the
+OVS bridge interface. It is de-SNAT'd on the host, then goes back into OVS from `antrea-gw0`, and is finally
+forwarded to the Pod.
+
+{{< img src="../assets/windows_external_traffic.svg" width="600" alt="Traffic to external" >}}
+
+### Host Traffic
+
+In "Transparent" mode, the Antrea Agent should also support the host traffic when necessary, which includes
+packets sent from the host to external addresses, and the ones sent from external addresses to the host.
+
+The host traffic enters OVS from the bridge interface and is output to the uplink interface if the destination is
+reachable from the network adapter that is plugged into OVS as the uplink. For the reverse path, the packet enters
+OVS from the uplink interface first, and is then directly output to the bridge interface and enters the Windows
+host. Traffic on Windows network adapters other than the OVS uplink interface is managed by the Windows host.
diff --git a/content/docs/v2.2.0-alpha.2/docs/egress.md b/content/docs/v2.2.0-alpha.2/docs/egress.md
new file mode 100644
index 00000000..3e7ac817
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/egress.md
@@ -0,0 +1,505 @@
+# Egress
+
+## Table of Contents
+
+
+- [What is Egress?](#what-is-egress)
+- [Prerequisites](#prerequisites)
+- [The Egress resource](#the-egress-resource)
+ - [AppliedTo](#appliedto)
+ - [EgressIP](#egressip)
+ - [ExternalIPPool](#externalippool)
+ - [Bandwidth](#bandwidth)
+- [The ExternalIPPool resource](#the-externalippool-resource)
+ - [IPRanges](#ipranges)
+ - [SubnetInfo](#subnetinfo)
+ - [NodeSelector](#nodeselector)
+- [Usage examples](#usage-examples)
+ - [Configuring High-Availability Egress](#configuring-high-availability-egress)
+ - [Configuring static Egress](#configuring-static-egress)
+- [Configuration options](#configuration-options)
+- [Egress on Cloud](#egress-on-cloud)
+ - [AWS](#aws)
+- [Limitations](#limitations)
+- [Known issues](#known-issues)
+
+
+## What is Egress?
+
+`Egress` is a CRD API that manages external access from the Pods in a cluster.
+It supports specifying which egress (SNAT) IP the traffic from the selected Pods
+to the external network should use. When a selected Pod accesses the external
+network, the egress traffic will be tunneled to the Node that hosts the egress
+IP if it's different from the Node that the Pod runs on and will be SNATed to
+the egress IP when leaving that Node.
+
+You may be interested in using this capability if any of the following apply:
+
+- A consistent IP address is desired when specific Pods connect to services
+ outside of the cluster, for source tracing in audit logs, or for filtering
+ by source IP in external firewall, etc.
+
+- You want to force outgoing external connections to leave the cluster via
+ certain Nodes, for security controls, or due to network topology restrictions.
+
+This guide demonstrates how to configure `Egress` to achieve the above result.
+
+## Prerequisites
+
+Egress was introduced in v1.0 as an alpha feature, and was graduated to beta in
+v1.6, at which time it was enabled by default. Prior to v1.6, the `Egress`
+feature gate must be enabled on both the antrea-controller and the antrea-agent in the
+`antrea-config` ConfigMap, as shown below, for the feature to work:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Egress: true
+ antrea-controller.conf: |
+ featureGates:
+ Egress: true
+```
+
+## The Egress resource
+
+A typical Egress resource example:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ podSelector:
+ matchLabels:
+ role: web
+ egressIP: 10.10.0.8 # can be populated by Antrea after assigning an IP from the pool below
+ externalIPPool: prod-external-ip-pool
+status:
+ egressNode: node01
+```
+
+### AppliedTo
+
+The `appliedTo` field specifies the grouping criteria of Pods to which the
+Egress applies. Pods can be selected cluster-wide using `podSelector`. If set
+with a `namespaceSelector`, all Pods from Namespaces selected by the
+`namespaceSelector` will be selected. Specific Pods from specific Namespaces can
+be selected by providing both a `podSelector` and a `namespaceSelector`. Empty
+`appliedTo` selects nothing. The field is mandatory.
+
+### EgressIP
+
+The `egressIP` field specifies the egress (SNAT) IP the traffic from the
+selected Pods to the external network should use. **The IP must be reachable
+from all Nodes.** The IP can be specified when creating the Egress. Starting
+with Antrea v1.2, it can be allocated from an `ExternalIPPool` automatically.
+
+- If `egressIP` is not specified, `externalIPPool` must be specified. An IP will
+ be allocated from the pool by the antrea-controller. The IP will be assigned
+ to a Node selected by the `nodeSelector` of the `externalIPPool` automatically.
+- If both `egressIP` and `externalIPPool` are specified, the IP must be in the
+ range of the pool. Similarly, the IP will be assigned to a Node selected by
+ the `externalIPPool` automatically.
+- If only `egressIP` is specified, Antrea will not manage the assignment of the
+ IP and it must be assigned to an arbitrary interface of one Node manually.
+
+**Starting with Antrea v1.2, high availability is provided automatically when
+the `egressIP` is allocated from an `externalIPPool`**, i.e. when the
+`externalIPPool` is specified. If the Node hosting the `egressIP` fails, another
+Node will be elected (from among the remaining Nodes selected by the
+`nodeSelector` of the `externalIPPool`) as the new egress Node of this Egress.
+It will take over the IP and send layer 2 advertisement (for example, Gratuitous
+ARP for IPv4) to notify the other hosts and routers on the network that the MAC
+address associated with the IP has changed. A dummy interface `antrea-egress0` is
+automatically created on the Node hosting the egress IP. The interface is intended
+to stay down: egress traffic does not flow through it, but through the interface
+determined by the route table.
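+
+To check which Node currently holds an Egress IP, you can combine the Egress status with a look at the dummy
+interface on that Node; a sketch:
+
+```bash
+# The NODE column shows the current egress Node of each Egress.
+kubectl get egress
+# On that Node, the Egress IP appears on the (down) dummy interface.
+ip addr show antrea-egress0
+```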
+
+**Note**: If more than one Egress applies to a Pod and they specify different
+`egressIP`, the effective egress IP will be selected randomly.
+
+### ExternalIPPool
+
+The `externalIPPool` field specifies the name of the `ExternalIPPool` that the
+`egressIP` should be allocated from. It also determines which Nodes the IP can
+be assigned to. It can be empty, which means users should assign the `egressIP`
+to one Node manually.
+
+### Bandwidth
+
+The `bandwidth` field enables traffic shaping for an Egress, by limiting the
+bandwidth for all egress traffic belonging to this Egress. `rate` specifies
+the maximum transmission rate. `burst` specifies the maximum burst size when
+traffic exceeds the rate. The user-provided values for `rate` and `burst` must
+follow the Kubernetes [Quantity](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/) format,
+e.g. 300k, 100M, 2G. All backend workloads selected by a rate-limited Egress share the
+same bandwidth while sending egress traffic via this Egress. If these limits are exceeded,
+the traffic will be dropped.
+
+**Note**: Traffic shaping is currently in alpha version. To use this feature, users should
+enable the `EgressTrafficShaping` feature gate. Only one bandwidth can be applied to each Egress IP.
+If multiple Egresses use the same IP but configure different bandwidths, the effective
+bandwidth will be selected randomly from the set of configured bandwidths. The effective use of the `bandwidth`
+feature requires the OVS datapath to support meters.
+
+An Egress with traffic shaping example:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ podSelector:
+ matchLabels:
+ role: web
+ egressIP: 10.10.0.8
+ bandwidth:
+ rate: 800M
+ burst: 2G
+status:
+ egressNode: node01
+```
+
+## The ExternalIPPool resource
+
+ExternalIPPool defines one or multiple IP ranges that can be used in the
+external network. The IPs in the pool can be allocated to the Egress resources
+as the Egress IPs. A typical ExternalIPPool resource example:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: prod-external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.2
+ end: 10.10.0.10
+ - cidr: 10.10.1.0/28
+ nodeSelector:
+ matchLabels:
+ network-role: egress-gateway
+```
+
+### IPRanges
+
+The `ipRanges` field contains a list of IP ranges representing the available IPs
+of this IP pool. Each IP range may consist of a `cidr` or a pair of `start` and
+`end` IPs (which are themselves included in the range).
+
+When using a CIDR to define an IP range, it is important to keep in mind that
+the first IP in the CIDR will be excluded and will never be allocated. This is
+because when the CIDR represents a traditional subnet, the first IP is typically
+the "network IP". Additionally, for IPv4, the last IP in the CIDR, which
+traditionally represents the "broadcast IP", will also be excluded. As a result,
+providing a /32 CIDR or a /31 CIDR will yield an empty pool of IP addresses. A
+/28 CIDR will yield 14 allocatable IP addresses. In the future we may make this
+behavior configurable, so that the full CIDR can be used if desired.
+
+### SubnetInfo
+
+By default, it's assumed that the IPs allocated from an ExternalIPPool are in
+the same subnet as the Node IPs. Starting with Antrea v1.15, IPs can be
+allocated from a subnet different from the Node IPs.
+
+The optional `subnetInfo` field contains the subnet attributes of the IPs in
+this pool. When using a different subnet:
+
+* `gateway` and `prefixLength` must be set. Antrea will route Egress traffic to
+the specified gateway when the destination is not in the same subnet of the
+Egress IP, otherwise route it to the destination directly.
+
+* Optionally, you can specify `vlan` if the underlying network is expecting it.
+Once set, Antrea will tag Egress traffic leaving the Egress Node with the
+specified VLAN ID. Correspondingly, it's expected that reply traffic towards
+these Egress IPs is also tagged with the specified VLAN ID when arriving at the
+Egress Node.
+
+An example of ExternalIPPool using a non-default subnet is as below:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: prod-external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.2
+ end: 10.10.0.10
+ subnetInfo:
+ gateway: 10.10.0.1
+ prefixLength: 24
+ vlan: 10
+ nodeSelector:
+ matchLabels:
+ network-role: egress-gateway
+```
+
+**Note**: Specifying different subnets is currently in alpha version. To use
+this feature, users should enable the `EgressSeparateSubnet` feature gate.
+Currently, the maximum number of different subnets that can be supported in a
+cluster is 20, which should be sufficient for most cases. If you need to have
+more subnets, please raise an issue with your use case, and we will consider
+revising the limit based on that.
+
+### NodeSelector
+
+The `nodeSelector` field specifies which Nodes the IPs in this pool can be
+assigned to. It's useful when you want to limit egress traffic to certain Nodes.
+The semantics of the selector are the same as elsewhere in Kubernetes,
+i.e. both `matchLabels` and `matchExpressions` are supported. It can be empty,
+which means all Nodes can be selected.
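+
+For example, to make a Node eligible for the pool above by giving it the label its `nodeSelector` expects:
+
+```bash
+# Label a Node so it can be selected as an Egress IP candidate.
+kubectl label node node-4 network-role=egress-gateway
+```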
+
+## Usage examples
+
+### Configuring High-Availability Egress
+
+In this example, we will make web apps in different Namespaces use different
+egress IPs to access the external network.
+
+First, create an `ExternalIPPool` with a list of external routable IPs on the
+network.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.11 # 10.10.0.11-10.10.0.20 can be used as Egress IPs
+ end: 10.10.0.20
+ nodeSelector: {} # All Nodes can be Egress Nodes
+```
+
+Then create two `Egress` resources, each of which applies to web apps in one
+Namespace.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: prod
+ podSelector:
+ matchLabels:
+ app: web
+ externalIPPool: external-ip-pool
+---
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-staging-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: staging
+ podSelector:
+ matchLabels:
+ app: web
+ externalIPPool: external-ip-pool
+```
+
+List the `Egress` resources with kubectl. The output shows that each Egress gets one
+IP from the IP pool and one Node assigned as its Egress Node.
+
+```bash
+# kubectl get egress
+NAME EGRESSIP AGE NODE
+egress-prod-web 10.10.0.11 1m node-4
+egress-staging-web 10.10.0.12 1m node-6
+```
+
+Now, the packets from the Pods with label `app=web` in the `prod` Namespace to
+the external network will be redirected to the `node-4` Node and SNATed to
+`10.10.0.11` while the packets from the Pods with label `app=web` in the
+`staging` Namespace to the external network will be redirected to the `node-6`
+Node and SNATed to `10.10.0.12`.
+
+Finally, if the `node-4` Node powers off, `10.10.0.11` will be re-assigned to
+another available Node quickly, and the packets from the Pods with label
+`app=web` in the `prod` Namespace will be redirected to the new Node, minimizing
+egress connection disruption without manual intervention.
+
+### Configuring static Egress
+
+In this example, we will make Pods in different Namespaces use specific Node IPs
+(or any IPs that are configured to the interfaces of the Nodes) to access the
+external network.
+
+Since the Egress IPs have been configured to the Nodes, we can create `Egress`
+resources with specific IPs directly.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: prod
+ egressIP: 10.10.0.104 # node-4's IP
+---
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-staging
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: staging
+ egressIP: 10.10.0.105 # node-5's IP
+```
+
+List the `Egress` resources with kubectl. The output shows that `10.10.0.104`
+is discovered on `node-4` while `10.10.0.105` is discovered on `node-5`.
+
+```bash
+# kubectl get egress
+NAME EGRESSIP AGE NODE
+egress-prod 10.10.0.104 1m node-4
+egress-staging 10.10.0.105 1m node-5
+```
+
+Now, the packets from the Pods in the `prod` Namespace to the external network
+will be redirected to the `node-4` Node and SNATed to `10.10.0.104`, while the
+packets from the Pods in the `staging` Namespace to the external network will
+be redirected to the `node-5` Node and SNATed to `10.10.0.105`.
+
+In this configuration, if the `node-4` Node powers off, re-configuring
+`10.10.0.104` to another Node or updating the `egressIP` of `egress-prod` to
+another Node's IP can recover the egress connection. Antrea will detect the
+configuration change and redirect the packets from the Pods in the `prod`
+Namespace to the new Node.
+
+## Configuration options
+
+There are several options that can be configured for Egress, depending on your
+use case.
+
+- `egress.exceptCIDRs` - The CIDR ranges to which outbound Pod traffic will not
+ be SNAT'd by Egresses. The option was added in Antrea v1.4.0.
+- `egress.maxEgressIPsPerNode` - The maximum number of Egress IPs that can be
+ assigned to a Node. It's useful when the Node network restricts the number of
+ secondary IPs a Node can have, e.g. in AWS EC2. The configured value must not
+ be greater than 255. The restriction applies to all Nodes in the cluster. If
+ you want to set different capacities for Nodes, the
+ `node.antrea.io/max-egress-ips` annotation of Node objects can be used to
+ specify different values for different Nodes, taking priority over the value
+ configured in the config file. The option and the annotation were added in
+ Antrea v1.11.0.
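+
+For reference, below is a minimal sketch of how these options could be set in
+the `antrea-agent.conf` entry of the Antrea ConfigMap (the CIDR and the limit
+are illustrative values):
+
+```yaml
+  antrea-agent.conf: |
+    egress:
+      # Traffic to this CIDR will not be SNAT'd by Egresses (example value).
+      exceptCIDRs:
+        - 10.20.0.0/16
+      # At most 10 Egress IPs can be assigned to each Node (example value).
+      maxEgressIPsPerNode: 10
+```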
+
+## Egress on Cloud
+
+High-Availability Egress requires the Egress IPs to be able to float across
+Nodes. When assigning an Egress IP to a Node, Antrea assumes the
+responsibility of advertising the Egress IP to the Node network via the ARP or
+NDP protocols. However, cloud networks usually apply SpoofGuard, which
+prevents the Nodes from using any IP that is not configured for them in the
+cloud's control plane, or may not even support multicast and broadcast. These
+restrictions mean that High-Availability Egress is not as readily available on
+some clouds as it is on on-premises networks, and some custom (i.e.,
+cloud-specific) work is required in the cloud's control plane to assign the
+Egress IPs as secondary Node IPs.
+
+### AWS
+
+In Amazon VPC, ARP packets never hit the network, and traffic with an Egress
+IP as the source or destination IP isn't transmitted unless it is explicitly
+authorized (check the [AWS VPC Whitepaper](https://docs.aws.amazon.com/whitepapers/latest/logical-separation/vpc-and-accompanying-features.html)
+for more information). To authorize an Egress IP, it must be configured as the
+secondary IP of the primary network interface of the Egress Node instance. You
+can refer to the [AWS doc](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html#assignIP-existing)
+to assign a secondary IP to a network interface.
+
+If you are using static Egress and managing the assignment of Egress IPs
+yourself, you should ensure the Egress IP is assigned as one of the IP
+addresses of the primary network interface of the Egress Node instance, via
+the Amazon EC2 console or the AWS CLI.
+
+If you are using High-Availability Egress and letting Antrea manage the
+assignment of Egress IPs: at the moment, Antrea can only assign the Egress IP
+to an Egress Node at the operating system level (i.e., add the IP to the
+interface), and you still need to ensure the Egress IP is assigned to the Node
+instance via the Amazon EC2 console or the AWS CLI. To automate this, you can
+build a Kubernetes Operator
+which watches the Egress API, gets the Egress IP and the Egress Node from the
+status fields, and configures the Egress IP as the secondary IP of the primary
+network interface of the Egress Node instance via the
+[AssignPrivateIpAddresses](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssignPrivateIpAddresses.html)
+API.
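+
+For illustration, the equivalent operation with the AWS CLI would look like
+the following. The ENI ID and the IP are hypothetical; use the ID of the
+Egress Node's primary network interface and the Egress IP reported in the
+Egress status:
+
+```bash
+# Authorize the Egress IP by adding it as a secondary private IP of the
+# Egress Node's primary network interface.
+aws ec2 assign-private-ip-addresses \
+    --network-interface-id eni-0123456789abcdef0 \
+    --private-ip-addresses 10.10.0.11
+```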
+
+## Limitations
+
+This feature is currently only supported for Nodes running Linux and "encap"
+mode. The support for Windows and other traffic modes will be added in the
+future.
+
+Prior to Antrea v1.7.0, the Egress implementation did not work with the
+`strictARP` configuration of `kube-proxy` IPVS mode. The `strictARP`
+configuration is required by some Service load balancing solutions, including
+[Antrea Service external IP management, MetalLB](service-loadbalancer.md#interoperability-with-kube-proxy-ipvs-mode),
+and kube-vip, which meant Antrea Egress could not work together with these
+solutions in a cluster using `kube-proxy` IPVS mode. The issue was fixed in
+Antrea v1.7.0.
+
+## Known issues
+
+To support the `EgressSeparateSubnet` feature, VLAN sub-interfaces will be
+created by Antrea Agent on a Node, and the `rp_filter` setting of the VLAN
+sub-interfaces should be set to `2`, which configures loose reverse path
+filtering. In a vanilla Kubernetes cluster, Antrea Agent will set `rp_filter` to
+`2` automatically, without user intervention. However, it has been observed
+that the `rp_filter` update by Antrea has no effect on an OpenShift cluster,
+due to [a known issue](https://github.com/antrea-io/antrea/issues/6546). A
+workaround for this issue is to leverage the OpenShift Node Tuning Operator to
+update `rp_filter` for all interfaces on all Egress Nodes:
+
+```yaml
+apiVersion: tuned.openshift.io/v1
+kind: Tuned
+metadata:
+ name: antrea
+ namespace: openshift-cluster-node-tuning-operator
+spec:
+ profile:
+ - data: |
+ [main]
+ summary=Update rp_filter for all
+ [sysctl]
+ net.ipv4.conf.all.rp_filter=2
+ name: openshift-antrea
+ recommend:
+ - match:
+ - label: network-role
+ value: egress-gateway
+ priority: 10
+ profile: openshift-antrea
+```
+
+After you apply the above `Tuned` CR named `antrea` in an OpenShift cluster,
+the Node Tuning Operator will reconcile the CR and update
+`net.ipv4.conf.all.rp_filter` to `2` for all the matched Nodes (i.e. all Nodes
+with the label `network-role=egress-gateway`). Please refer to the OpenShift
+documentation on [Using the Node Tuning Operator](https://docs.openshift.com/container-platform/4.16/scalability_and_performance/using-node-tuning-operator.html)
+for more details.
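+
+You can then verify the setting on an Egress Node. This is a sketch; `oc
+debug` is one way to get a shell on the Node:
+
+```bash
+# From a shell on the Egress Node, check the effective rp_filter value.
+sysctl net.ipv4.conf.all.rp_filter
+# Expected output: net.ipv4.conf.all.rp_filter = 2
+```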
diff --git a/content/docs/v2.2.0-alpha.2/docs/eks-installation.md b/content/docs/v2.2.0-alpha.2/docs/eks-installation.md
new file mode 100644
index 00000000..808c0bdc
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/eks-installation.md
@@ -0,0 +1,146 @@
+# Deploying Antrea in AWS EKS
+
+This document describes steps to deploy Antrea in `networkPolicyOnly` mode or `encap` mode to an
+AWS EKS cluster.
+
+## Deploying Antrea in `networkPolicyOnly` mode
+
+In `networkPolicyOnly` mode, Antrea implements NetworkPolicy and other services for an EKS cluster,
+while Amazon VPC CNI takes care of IPAM and Pod traffic routing across Nodes. Refer to
+[the design document](design/policy-only.md) for more information about `networkPolicyOnly` mode.
+
+This document assumes you already have an EKS cluster, and have the `KUBECONFIG` environment variable
+point to the kubeconfig file of that cluster. You can follow [the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+to create the cluster.
+
+Starting with the Antrea v0.9.0 release, you should apply
+`antrea-eks-node-init.yml` before deploying Antrea. This will restart existing
+Pods (except those in the host network), so that Antrea can also manage them
+(i.e. enforce NetworkPolicies on them) once it is installed.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-eks-node-init.yml
+```
+
+To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases).
+Note that EKS support was added in release 0.5.0, which means you cannot
+pick a release older than 0.5.0. For any given release `<TAG>` (e.g. `v0.5.0`),
+you can deploy Antrea as follows:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-eks.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea-eks.yml):
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-eks.yml
+```
+
+Now Antrea should be plugged into the EKS CNI and ready to enforce NetworkPolicy.
+
+## Deploying Antrea in `encap` mode
+
+In `encap` mode, Antrea acts as the primary CNI of an EKS cluster, and
+implements all Pod networking functionalities, including IPAM and routing across
+Nodes. The major benefit of running Antrea as the primary CNI is that it
+removes the per-Node Pod limits imposed by Amazon VPC CNI. For example, the
+default mode of VPC CNI allocates a secondary IP for each Pod, and the maximum
+number of Pods that can be created on a Node is determined by the maximum
+number of elastic network interfaces and secondary IPs per interface that can
+be attached to an EC2 instance type. When Antrea is the primary CNI, Pods are
+connected to the Antrea overlay network and Pod IPs are allocated from the
+private CIDRs configured for an EKS cluster, so the number of Pods per Node is
+no longer limited by the number of secondary IPs per instance.
+
+Note: as a general limitation when using custom CNIs with EKS, Antrea cannot
+be installed on the EKS control plane Nodes. As a result, the EKS control
+plane cannot initiate a connection to a Pod in the Antrea overlay network when
+Antrea runs in `encap` mode, and so applications that require control plane to
+Pod connections might not work properly. For example, [Kubernetes API aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation),
+[apiserver proxy](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls),
+or [admission controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers)
+will not work with `encap` mode on EKS when the Services are provided by Pods
+in the overlay network. A workaround is to run such Pods in the host network,
+as sketched below.
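+
+Below is a minimal sketch of a Pod spec that backs such a Service in the host
+network (the Pod name and image are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: webhook-server   # hypothetical admission webhook backend
+spec:
+  hostNetwork: true      # use the Node network, reachable from the EKS control plane
+  containers:
+    - name: server
+      image: registry.example.com/webhook-server:latest   # assumed image
+```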
+
+### 1. Create an EKS cluster without Nodes
+
+This guide uses `eksctl` to create an EKS cluster, but you can also follow the
+[EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+to create an EKS cluster. `eksctl` can be installed following the [eksctl guide](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html).
+
+Run the following `eksctl` command to create a cluster named `antrea-eks-cluster`:
+
+```bash
+eksctl create cluster --name antrea-eks-cluster --without-nodegroup
+```
+
+After the command runs successfully, you should be able to access the cluster
+using `kubectl`, for example:
+
+```bash
+kubectl get node
+```
+
+Note that as the cluster does not have a node group configured yet, the
+command will not return any Node.
+
+### 2. Delete Amazon VPC CNI
+
+As Antrea is the primary CNI in `encap` mode, the VPC CNI (`aws-node` DaemonSet)
+installed with the EKS cluster needs to be deleted:
+
+```bash
+kubectl -n kube-system delete daemonset aws-node
+```
+
+### 3. Install Antrea
+
+First, download the Antrea deployment yaml. Note that `encap` mode support for
+EKS was added in release 1.4.0, which means you cannot pick a release older
+than 1.4.0. For any given release `<TAG>` (e.g. `v1.4.0`), get the Antrea
+deployment yaml at:
+
+```text
+https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), get the
+deployment yaml at:
+
+```text
+https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+`encap` mode on EKS requires Antrea's built-in Node IPAM feature to be enabled.
+For information about how to configure Antrea Node IPAM, please refer to
+[Antrea Node IPAM guide](antrea-ipam.md#running-nodeipam-within-antrea-controller).
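+
+For instance, below is a minimal sketch of the relevant
+`antrea-controller.conf` settings in the deployment yaml (the cluster CIDR is
+an example value; use the Pod CIDR configured for your EKS cluster):
+
+```yaml
+  antrea-controller.conf: |
+    featureGates:
+      NodeIPAM: true
+    nodeIPAM:
+      # Let the Antrea Controller allocate Pod CIDRs for Nodes.
+      enableNodeIPAM: true
+      clusterCIDRs:
+        - 172.16.0.0/16
+```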
+
+After enabling Antrea Node IPAM in the deployment yaml, deploy Antrea with:
+
+```bash
+kubectl apply -f antrea.yml
+```
+
+### 4. Create a node group for the EKS cluster
+
+For example, you can run the following command to create a node group of two
+Nodes:
+
+```bash
+eksctl create nodegroup --cluster antrea-eks-cluster --nodes 2
+```
+
+### 5. Validate Antrea installation
+
+After the EKS Nodes are successfully created and booted, you can verify that
+Antrea Controller and Agent Pods are running on the Nodes:
+
+```bash
+$ kubectl get pods --namespace kube-system -l app=antrea
+NAME READY STATUS RESTARTS AGE
+antrea-agent-bpj72 2/2 Running 0 40s
+antrea-agent-j2sjz 2/2 Running 0 40s
+antrea-controller-6f7468cbff-5sk4t 1/1 Running 0 43s
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/external-node.md b/content/docs/v2.2.0-alpha.2/docs/external-node.md
new file mode 100644
index 00000000..7db2da51
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/external-node.md
@@ -0,0 +1,664 @@
+# External Node
+
+## Table of Contents
+
+
+- [What is ExternalNode?](#what-is-externalnode)
+- [Prerequisites](#prerequisites)
+- [The ExternalNode resource](#the-externalnode-resource)
+ - [Name and Namespace](#name-and-namespace)
+ - [Interfaces](#interfaces)
+- [Install Antrea Agent on VM](#install-antrea-agent-on-vm)
+ - [Prerequisites on Kubernetes cluster](#prerequisites-on-kubernetes-cluster)
+ - [Installation on Linux VM](#installation-on-linux-vm)
+ - [Prerequisites on Linux VM](#prerequisites-on-linux-vm)
+ - [Installation steps on Linux VM](#installation-steps-on-linux-vm)
+ - [Service Installation](#service-installation)
+ - [Container Installation](#container-installation)
+ - [Installation on Windows VM](#installation-on-windows-vm)
+ - [Prerequisites on Windows VM](#prerequisites-on-windows-vm)
+ - [Installation steps on Windows VM](#installation-steps-on-windows-vm)
+- [VM network configuration](#vm-network-configuration)
+- [RBAC for antrea-agent](#rbac-for-antrea-agent)
+- [Apply Antrea NetworkPolicy to ExternalNode](#apply-antrea-networkpolicy-to-externalnode)
+ - [Antrea NetworkPolicy configuration](#antrea-networkpolicy-configuration)
+ - [Bypass Antrea NetworkPolicy](#bypass-antrea-networkpolicy)
+- [OpenFlow pipeline](#openflow-pipeline)
+ - [Non-IP packet](#non-ip-packet)
+ - [IP packet](#ip-packet)
+- [Limitations](#limitations)
+
+
+## What is ExternalNode?
+
+`ExternalNode` is a CRD API that enables Antrea to manage the network connectivity
+and security on a non-Kubernetes Node (like a virtual machine or a bare-metal
+server). It supports specifying which network interfaces on the external Node
+are expected to be protected with Antrea NetworkPolicy rules. The virtual machine
+or bare-metal server represented by an `ExternalNode` resource can be either
+Linux or Windows. "External Node" will be used to designate such a virtual
+machine or bare-metal server in the rest of this document.
+
+Antrea NetworkPolicies are applied to an external Node by leveraging the
+`ExternalEntity` resource. `antrea-controller` creates an `ExternalEntity`
+resource for each network interface specified in the `ExternalNode` resource.
+
+`antrea-agent` runs on the external Node, and controls network connectivity
+and security by attaching the network interface(s) to an OVS bridge.
+A [new OpenFlow pipeline](#openflow-pipeline) has been implemented, dedicated to
+the ExternalNode feature.
+
+You may be interested in using this capability in the following scenarios:
+
+- You want to apply Antrea NetworkPolicy to an external Node.
+- You want the same security configuration on external Nodes across all
+  Operating Systems.
+
+This guide demonstrates how to configure `ExternalNode` to achieve the above
+result.
+
+## Prerequisites
+
+`ExternalNode` was introduced in v1.8 as an alpha feature. The `ExternalNode`
+feature gate must be enabled in both the `antrea-controller` and `antrea-agent`
+configurations. For the feature to work, the configuration for
+`antrea-controller` is modified in the `antrea-config` ConfigMap as follows:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ ExternalNode: true
+```
+
+The `antrea-controller` implements the `antrea` Service, which accepts
+connections from each `antrea-agent` and is an important part of the
+NetworkPolicy implementation. By default, the `antrea` Service has type
+`ClusterIP`. Because external Nodes run outside of the Kubernetes cluster, they
+cannot directly access the `ClusterIP` address. Therefore, the `antrea` Service
+needs to become externally-accessible, by changing its type to `NodePort` or
+`LoadBalancer`.
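+
+For example, below is a sketch of one way to do this with kubectl, assuming
+the default manifest, which creates the `antrea` Service in the `kube-system`
+Namespace:
+
+```bash
+# Change the antrea Service type to NodePort, so that external Nodes can
+# reach the antrea-controller API via a Node IP.
+kubectl -n kube-system patch service antrea -p '{"spec": {"type": "NodePort"}}'
+```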
+
+Since `antrea-agent` is running on an external Node which is not managed by
+Kubernetes, a configuration file needs to be present on each machine where the
+`antrea-agent` is running, and the path to this file will be provided to the
+`antrea-agent` as a command-line argument. Refer to the [sample configuration](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/conf/antrea-agent.conf)
+to learn the `antrea-agent` configuration options when running on an external Node.
+
+A further [section](#install-antrea-agent-on-vm) will provide detailed steps
+for running the `antrea-agent` on a VM.
+
+## The ExternalNode resource
+
+An example `ExternalNode` resource:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha1
+kind: ExternalNode
+metadata:
+ name: vm1
+ namespace: vm-ns
+ labels:
+ role: db
+spec:
+ interfaces:
+ - ips: [ "172.16.100.3" ]
+ name: ""
+```
+
+Note: **Only one interface is supported for Antrea v1.8**.
+
+### Name and Namespace
+
+The `name` field in an `ExternalNode` uniquely identifies an external Node.
+The `ExternalNode` name is provided to `antrea-agent` via the `NODE_NAME`
+environment variable; if `NODE_NAME` is not set, `antrea-agent` will fall back
+to the hostname to find the `ExternalNode` resource.
+
+The `ExternalNode` resource is Namespace scoped. The Namespace is provided to
+`antrea-agent` with the option `externalNodeNamespace` in
+[antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/conf/antrea-agent.conf).
+
+```yaml
+externalNodeNamespace: vm-ns
+```
+
+### Interfaces
+
+The `interfaces` field specifies the list of the network interfaces expected
+to be guarded by Antrea NetworkPolicy. At least one interface is required. The
+interface `name` or `ips` is used to identify the target interface. **The
+field `ips` must be provided in the CRD**, but `name` is optional. Multiple
+IPs on a single interface are supported. In the case that multiple
+`interfaces` are configured, `name` must be specified for every `interface`.
+
+`antrea-controller` creates an `ExternalEntity` for each interface whenever an
+`ExternalNode` is created. The created `ExternalEntity` has the following
+characteristics:
+
+- It is configured within the same Namespace as the `ExternalNode`.
+- The `name` is generated according to the following principles:
+ - Use the `ExternalNode` name directly, if there is only one interface, and
+ interface name is not specified.
+ - Use the format `$ExternalNode.name-$hash($interface.name)[:5]` for other
+ cases.
+- The `externalNode` field is set with the `ExternalNode` name.
+- The `owner` is referring to the `ExternalNode` resource.
+- All labels added on `ExternalNode` are copied to the `ExternalEntity`.
+- Each IP address of the interface is added as an endpoint in the `endpoints`
+ list, and the interface name is used as the endpoint name if it is set.
+
+The `ExternalEntity` resource created for the above `ExternalNode` interface
+would look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: ExternalEntity
+metadata:
+ labels:
+ role: db
+ name: vm1
+ namespace: vm-ns
+ ownerReferences:
+ - apiVersion: v1alpha1
+ kind: ExternalNode
+ name: vm1
+ uid: 99b09671-72da-4c64-be93-17185e9781a5
+ resourceVersion: "5513"
+ uid: 5f360f32-7806-4d2d-9f36-80ce7db8de10
+spec:
+ endpoints:
+ - ip: 172.16.100.3
+ externalNode: vm1
+```
+
+## Install Antrea Agent on VM
+
+### Prerequisites on Kubernetes cluster
+
+1. Enable the `ExternalNode` feature on the `antrea-controller`, and expose the
+ antrea Service externally (e.g., as a NodePort Service).
+2. Create a Namespace for `antrea-agent`. This document will use `vm-ns` as an
+ example Namespace for illustration.
+
+ ```bash
+ kubectl create ns vm-ns
+ ```
+
+3. Create a ServiceAccount, ClusterRole and ClusterRoleBinding for `antrea-agent`
+ as shown below. If you use a Namespace other than `vm-ns`, you need to update
+ the [VM RBAC manifest](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/vm-agent-rbac.yml) and
+ change `vm-ns` to the right Namespace.
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/externalnode/vm-agent-rbac.yml
+ ```
+
+4. Create `antrea-agent.kubeconfig` file for `antrea-agent` to access the K8S
+ API server.
+
+ ```bash
+ CLUSTER_NAME="kubernetes"
+ SERVICE_ACCOUNT="vm-agent"
+ NAMESPACE="vm-ns"
+ KUBECONFIG="antrea-agent.kubeconfig"
+ APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
+ TOKEN=$(kubectl -n $NAMESPACE get secrets -o jsonpath="{.items[?(@.metadata.name=='${SERVICE_ACCOUNT}-service-account-token')].data.token}"|base64 --decode)
+ kubectl config --kubeconfig=$KUBECONFIG set-cluster $CLUSTER_NAME --server=$APISERVER --insecure-skip-tls-verify=true
+ kubectl config --kubeconfig=$KUBECONFIG set-credentials antrea-agent --token=$TOKEN
+ kubectl config --kubeconfig=$KUBECONFIG set-context antrea-agent@$CLUSTER_NAME --cluster=$CLUSTER_NAME --user=antrea-agent
+ kubectl config --kubeconfig=$KUBECONFIG use-context antrea-agent@$CLUSTER_NAME
+ # Copy antrea-agent.kubeconfig to the VM
+ ```
+
+5. Create `antrea-agent.antrea.kubeconfig` file for `antrea-agent` to access
+ the `antrea-controller` API server.
+
+ ```bash
+ # Specify the antrea-controller API server endpoint. Antrea-Controller needs
+ # to be exposed via the Node IP or a public IP that is reachable from the VM
+ ANTREA_API_SERVER="https://172.18.0.1:443"
+ ANTREA_CLUSTER_NAME="antrea"
+ NAMESPACE="vm-ns"
+ KUBECONFIG="antrea-agent.antrea.kubeconfig"
+ TOKEN=$(kubectl -n $NAMESPACE get secrets -o jsonpath="{.items[?(@.metadata.name=='${SERVICE_ACCOUNT}-service-account-token')].data.token}"|base64 --decode)
+ kubectl config --kubeconfig=$KUBECONFIG set-cluster $ANTREA_CLUSTER_NAME --server=$ANTREA_API_SERVER --insecure-skip-tls-verify=true
+ kubectl config --kubeconfig=$KUBECONFIG set-credentials antrea-agent --token=$TOKEN
+ kubectl config --kubeconfig=$KUBECONFIG set-context antrea-agent@$ANTREA_CLUSTER_NAME --cluster=$ANTREA_CLUSTER_NAME --user=antrea-agent
+ kubectl config --kubeconfig=$KUBECONFIG use-context antrea-agent@$ANTREA_CLUSTER_NAME
+ # Copy antrea-agent.antrea.kubeconfig to the VM
+ ```
+
+6. Create an `ExternalNode` resource for the VM.
+
+ After preparing the `ExternalNode` configuration yaml for the VM, we can
+ apply it in the cluster.
+
+ ```bash
+ cat << EOF | kubectl apply -f -
+ apiVersion: crd.antrea.io/v1alpha1
+ kind: ExternalNode
+ metadata:
+ name: vm1
+ namespace: vm-ns
+ labels:
+ role: db
+ spec:
+ interfaces:
+ - ips: [ "172.16.100.3" ]
+ name: ""
+ EOF
+ ```
+
+### Installation on Linux VM
+
+#### Prerequisites on Linux VM
+
+OVS needs to be installed on the VM. For more information about OVS installation
+please refer to the [getting-started guide](getting-started.md#open-vswitch).
+
+#### Installation steps on Linux VM
+
+`Antrea Agent` can be installed as a native service, or it can run in a container.
+
+##### Service Installation
+
+1. Build `antrea-agent` binary in the root of the Antrea code tree and copy the
+ `antrea-agent` binary from the `bin` directory to the Linux VM.
+
+ ```bash
+ make docker-bin
+ ```
+
+2. Copy configuration files to the VM, including [antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/conf/antrea-agent.conf),
+ which specifies agent configuration parameters;
+ `antrea-agent.antrea.kubeconfig` and `antrea-agent.kubeconfig`, which were
+ generated in steps 4 and 5 of [Prerequisites on Kubernetes cluster](#prerequisites-on-kubernetes-cluster).
+
+3. Bootstrap `antrea-agent` using one of these 2 methods:
+
+ 1. Bootstrap `antrea-agent` using the [installation script](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/externalnode/install-vm.sh)
+ as shown below (Ubuntu 18.04 and 20.04, and Red Hat Enterprise Linux 8.4).
+
+ ```bash
+ ./install-vm.sh --ns vm-ns --bin ./antrea-agent --config ./antrea-agent.conf \
+ --kubeconfig ./antrea-agent.kubeconfig \
+ --antrea-kubeconfig ./antrea-agent.antrea.kubeconfig --nodename vm1
+ ```
+
+ 2. Bootstrap `antrea-agent` manually. First edit the `antrea-agent.conf` file
+ to set `clientConnection`, `antreaClientConnection` and `externalNodeNamespace`
+ to the correct values.
+
+ ```bash
+ AGENT_NAMESPACE="vm-ns"
+ AGENT_CONF_PATH="/etc/antrea"
+ mkdir -p $AGENT_CONF_PATH
+ # Copy antrea-agent kubeconfig files
+ cp ./antrea-agent.kubeconfig $AGENT_CONF_PATH
+ cp ./antrea-agent.antrea.kubeconfig $AGENT_CONF_PATH
+ # Update clientConnection and antreaClientConnection
+ sed -i "s|kubeconfig: |kubeconfig: $AGENT_CONF_PATH/|g" antrea-agent.conf
+ sed -i "s|#externalNodeNamespace: default|externalNodeNamespace: $AGENT_NAMESPACE|g" antrea-agent.conf
+ # Copy antrea-agent configuration file
+ cp ./antrea-agent.conf $AGENT_CONF_PATH
+ ```
+
+ Then create `antrea-agent` service. Below is a sample snippet to start
+ `antrea-agent` as a service on Ubuntu 18.04 or later:
+
+ Note: Environment variable `NODE_NAME` needs to be set in the service
+ configuration, if the VM's hostname is different from the name defined in
+ the `ExternalNode` resource.
+
+ ```bash
+ AGENT_BIN_PATH="/usr/sbin"
+ AGENT_LOG_PATH="/var/log/antrea"
+ mkdir -p $AGENT_BIN_PATH
+ mkdir -p $AGENT_LOG_PATH
+ cat << EOF > /etc/systemd/system/antrea-agent.service
+ [Unit]
+ Description="antrea-agent as a systemd service"
+ After=network.target
+ [Service]
+ Environment="NODE_NAME=vm1"
+ ExecStart=$AGENT_BIN_PATH/antrea-agent \
+ --config=$AGENT_CONF_PATH/antrea-agent.conf \
+ --logtostderr=false \
+ --log_file=$AGENT_LOG_PATH/antrea-agent.log
+ Restart=on-failure
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ sudo systemctl daemon-reload
+ sudo systemctl enable antrea-agent
+ sudo systemctl start antrea-agent
+ ```
+
+##### Container Installation
+
+1. `Docker` is used as the container runtime for Linux VMs. The Docker image can be built from source code
+ or can be downloaded from the Antrea repository.
+
+ 1. From Source
+
+ Build `antrea-agent-ubuntu` Docker image in the root of the Antrea code tree.
+
+ ```bash
+ make build-agent-ubuntu
+ ```
+
+ Note: The image repository name should be `antrea/antrea-agent-ubuntu` and tag should be `latest`.
+
+ Copy the `antrea/antrea-agent-ubuntu:latest` image to the target VM. Please follow
+ the below steps.
+
+ ```bash
+ # Save the image to a tar file
+ docker save -o <tar file name> antrea/antrea-agent-ubuntu:latest
+
+ # Copy the tar file to the target VM.
+ # Then load the image on the target VM.
+ docker load -i <tar file name>
+ ```
+
+ 2. Docker Repository
+
+ The released version of `antrea-agent-ubuntu` Docker image can be downloaded from Antrea `Dockerhub`
+ repository. Pick a version from the [list of releases](https://github.com/antrea-io/antrea/releases). For any given
+ release `<TAG>` (e.g. `v1.15.0`), download the `antrea-agent-ubuntu` Docker image as follows:
+
+ ```bash
+ docker pull antrea/antrea-agent-ubuntu:<TAG>
+ ```
+
+ The [installation script](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/externalnode/install-vm.sh) automatically downloads the specified
+ released version of the `antrea-agent-ubuntu` Docker image on the VM when the installation argument `--antrea-version` is provided,
+ and automatically loads that image into Docker. For any given release `<TAG>` (e.g. `v1.15.0`),
+ specify it in the `--antrea-version` argument as follows.
+
+ ```bash
+ --antrea-version <TAG>
+ ```
+
+2. Copy configuration files to the VM, including [antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/conf/antrea-agent.conf),
+ which specifies agent configuration parameters;
+ `antrea-agent.antrea.kubeconfig` and `antrea-agent.kubeconfig`, which were
+ generated in steps 4 and 5 of [Prerequisites on Kubernetes cluster](#prerequisites-on-kubernetes-cluster).
+
+3. Bootstrap `antrea-agent` using the [installation script](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/externalnode/install-vm.sh)
+ as shown below (Ubuntu 18.04, 20.04, and Red Hat Enterprise Linux 8.4).
+
+ ```bash
+ ./install-vm.sh --ns vm-ns --config ./antrea-agent.conf \
+ --kubeconfig ./antrea-agent.kubeconfig \
+ --antrea-kubeconfig ./antrea-agent.antrea.kubeconfig --containerize --antrea-version v1.9.0
+ ```
+
+### Installation on Windows VM
+
+#### Prerequisites on Windows VM
+
+1. Enable the Windows Hyper-V optional feature on Windows VM.
+
+ ```powershell
+ Install-WindowsFeature Hyper-V-PowerShell
+ Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
+ ```
+
+2. OVS needs to be installed on the VM. For more information about OVS
+ installation please refer to the [Antrea Windows documentation](windows.md#1-optional-install-ovs-provided-by-antrea-or-your-own).
+3. Download [nssm](https://nssm.cc/download) which will be used to create the
+ Windows service for `antrea-agent`.
+
+Note: at the moment, only Windows Server 2019 is supported.
+
+#### Installation steps on Windows VM
+
+1. Build `antrea-agent` binary in the root of the antrea code tree and copy the
+ `antrea-agent` binary from the `bin` directory to the Windows VM.
+
+ ```bash
+ #! /bin/bash
+ make docker-windows-bin
+ ```
+
+2. Copy [antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/conf/antrea-agent.conf),
+ `antrea-agent.kubeconfig` and `antrea-agent.antrea.kubeconfig` files to the
+ VM. Please refer to the step 2 of [Installation on Linux VM](#installation-steps-on-linux-vm)
+ section for more information.
+
+ ```powershell
+ $WIN_AGENT_CONF_PATH="C:\antrea-agent\conf"
+ New-Item -ItemType Directory -Force -Path $WIN_AGENT_CONF_PATH
+ # Copy antrea-agent kubeconfig files
+ Copy-Item .\antrea-agent.kubeconfig $WIN_AGENT_CONF_PATH
+ Copy-Item .\antrea-agent.antrea.kubeconfig $WIN_AGENT_CONF_PATH
+ # Copy antrea-agent configuration file
+ Copy-Item .\antrea-agent.conf $WIN_AGENT_CONF_PATH
+ ```
+
+3. Bootstrap `antrea-agent` using one of these 2 methods:
+
+ 1. Bootstrap `antrea-agent` using the [installation script](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/externalnode/install-vm.ps1)
+ as shown below (only Windows Server 2019 is tested and supported).
+
+ ```powershell
+ .\Install-vm.ps1 -Namespace vm-ns -BinaryPath .\antrea-agent.exe `
+ -ConfigPath .\antrea-agent.conf -KubeConfigPath .\antrea-agent.kubeconfig `
+ -AntreaKubeConfigPath .\antrea-agent.antrea.kubeconfig `
+ -InstallDir C:\antrea-agent -NodeName vm1
+ ```
+
+ 2. Bootstrap `antrea-agent` manually. First edit the `antrea-agent.conf` file to
+ set `clientConnection`, `antreaClientConnection` and `externalNodeNamespace`
+ to the correct values.
+ Configure environment variable `NODE_NAME` if the VM's hostname is different
+ from the name defined in the `ExternalNode` resource.
+
+ ```powershell
+ [Environment]::SetEnvironmentVariable("NODE_NAME", "vm1")
+ [Environment]::SetEnvironmentVariable("NODE_NAME", "vm1", [System.EnvironmentVariableTarget]::Machine)
+ ```
+
+ Then create `antrea-agent` service using nssm. Below is a sample snippet to start
+ `antrea-agent` as a service:
+
+ ```powershell
+ $WIN_AGENT_BIN_PATH="C:\antrea-agent"
+ $WIN_AGENT_LOG_PATH="C:\antrea-agent\logs"
+ New-Item -ItemType Directory -Force -Path $WIN_AGENT_BIN_PATH
+ New-Item -ItemType Directory -Force -Path $WIN_AGENT_LOG_PATH
+ Copy-Item .\antrea-agent.exe $WIN_AGENT_BIN_PATH
+ nssm.exe install antrea-agent $WIN_AGENT_BIN_PATH\antrea-agent.exe --config $WIN_AGENT_CONF_PATH\antrea-agent.conf --log_file $WIN_AGENT_LOG_PATH\antrea-agent.log --logtostderr=false
+ nssm.exe start antrea-agent
+ ```
+
+## VM network configuration
+
+`antrea-agent` uses the interface IPs or name to find the network interface on
+the external Node, and then attaches it to the OVS bridge. The network
+interface is attached to OVS as the uplink, and a new OVS internal port is
+created to take over the uplink interface's IP/MAC and routing configurations.
+On Windows, the DNS configurations are also moved from the uplink to the OVS
+internal port. Before attaching the uplink to OVS, the network interface is
+renamed with a suffix "~", and the OVS internal port is configured with the
+original name of the uplink. As a result, the IP/MAC/routing entries are seen
+on a network interface configured with the same name on the external Node.
+
+Outbound traffic sent from the external Node enters OVS from the internal port
+and is finally output via the uplink, while inbound traffic enters OVS from
+the uplink and is output to the internal port. IP packets are processed by the
+OpenFlow pipeline, and non-IP packets are forwarded directly.
+
+The following diagram depicts the OVS bridge and traffic forwarding on an
+external Node:
+![Traffic On ExternalNode](assets/traffic_external_node.svg)
+
+## RBAC for antrea-agent
+
+An external Node is regarded as an untrusted entity on the network. To follow
+the least privilege principle, the RBAC configuration for `antrea-agent`
+running on an external Node is as follows:
+
+- Only `get`, `list` and `watch` permissions are given on resource `ExternalNode`
+- Only `update` permission is given on resource `antreaagentinfos`, and `create`
+ permission is moved to `antrea-controller`
+
+For more details please refer to [vm-agent-rbac.yml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/vm-agent-rbac.yml)
+
+`antrea-agent` reports its status by updating the `antreaagentinfo` resource
+which is created with the same name as the `ExternalNode`. `antrea-controller`
+creates an `antreaagentinfo` resource for each new `ExternalNode`, and then
+`antrea-agent` updates it every minute with its latest status. `antreaagentinfo`
+is deleted by `antrea-controller` when the `ExternalNode` is deleted.
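+
+For example, to check the latest status reported by the `antrea-agent` running
+on the VM (using the example `ExternalNode` name `vm1` from above):
+
+```bash
+kubectl get antreaagentinfos vm1 -o yaml
+```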
+
+## Apply Antrea NetworkPolicy to ExternalNode
+
+### Antrea NetworkPolicy configuration
+
+An Antrea NetworkPolicy is applied to an `ExternalNode` by providing an
+`externalEntitySelector` in the `appliedTo` field. The `ExternalEntity`
+resource is automatically created for each interface of an `ExternalNode`.
+`ExternalEntity` resources are used by `antrea-controller` to process the
+NetworkPolicies, and each `antrea-agent` (including those running on external
+Nodes) receives the appropriate internal AntreaNetworkPolicy objects.
+
+The following types of (from/to) network peers are supported in an Antrea
+NetworkPolicy applied to an external Node:
+
+- ExternalEntities selected by an `externalEntitySelector`
+- An `ipBlock`
+- A FQDN address in an egress rule
+
+The following actions are supported in an Antrea NetworkPolicy applied to an
+external Node:
+
+- Allow
+- Drop
+- Reject
+
+Below is an example of applying an Antrea NetworkPolicy to the external Nodes
+labeled with `role=db` to reject SSH connections from IP "172.16.100.5" or from
+other external Nodes labeled with `role=front`:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: annp1
+ namespace: vm-ns
+spec:
+ priority: 9000.0
+ appliedTo:
+ - externalEntitySelector:
+ matchLabels:
+ role: db
+ ingress:
+ - action: Reject
+ ports:
+ - protocol: TCP
+ port: 22
+ from:
+ - externalEntitySelector:
+ matchLabels:
+ role: front
+ - ipBlock:
+ cidr: 172.16.100.5/32
+```
+
+### Bypass Antrea NetworkPolicy
+
+In some cases, users may want some particular traffic to bypass Antrea
+NetworkPolicy rules on an external Node, e.g., an SSH connection from a
+particular host to the external Node. `policyBypassRules` can be added in the
+agent configuration to define traffic that needs to bypass NetworkPolicy
+enforcement. Below is a configuration example:
+
+```yaml
+policyBypassRules:
+ - direction: ingress
+ protocol: tcp
+ cidr: 1.1.1.1/32
+ port: 22
+```
+
+The `direction` can be `ingress` or `egress`. The supported protocols include
+`tcp`, `udp`, `icmp` and `ip`. The `cidr` gives the peer address, which is the
+destination in an `egress` rule, and the source in an `ingress` rule. For the
+`tcp` and `udp` protocols, the `port` is required to specify the destination
+port.
+
+## OpenFlow pipeline
+
+A new OpenFlow pipeline, dedicated to the `ExternalNode` feature, is
+implemented by `antrea-agent`.
+
+![OVS pipeline](assets/ovs-pipeline-external-node.svg)
+
+### Non-IP packet
+
+`NonIPTable` is a new OpenFlow table introduced only on external Nodes,
+which is dedicated to all non-IP packets. A non-IP packet is forwarded between
+the pair ports directly, e.g., a non-IP packet entering OVS from the uplink
+interface is output to the paired internal port, and a packet from the internal
+port is output to the uplink.
+
+### IP packet
+
+A new OpenFlow pipeline is set up on external Nodes to process IP packets.
+Antrea NetworkPolicy enforcement is the major function in this new pipeline, and
+the OpenFlow tables used are similar to the Pod pipeline. No L3 routing is
+provided on an external Node, and a simple L2 forwarding policy is implemented.
+OVS connection tracking is used to assist the NetworkPolicy function; as a result
+only the first packet is validated by the OpenFlow entries, and the subsequent
+packets in an accepted connection are allowed directly.
+
+- Egress/Ingress Tables
+
+Table `XgressSecurityClassifierTable` is installed in both
+`stageEgressSecurity` and `stageIngressSecurity`, and is used to install the
+OpenFlow entries for the [`policyBypassRules`](#bypass-antrea-networkpolicy)
+defined in the agent configuration.
+
+This is an example of the OpenFlow entry for the above configuration:
+
+```text
+table=IngressSecurityClassifier, priority=200,ct_state=+new+trk,tcp,nw_src=1.1.1.1,tp_dst=22 actions=resubmit(,IngressMetric)
+```
+
+Other OpenFlow tables in `stageEgressSecurity` and `stageIngressSecurity` are
+the same as those installed on a Kubernetes worker Node. For more details about
+these tables, please refer to the general [introduction](design/ovs-pipeline.md)
+of Antrea OVS pipeline.
+
+- L2 Forwarding Tables
+
+`L2ForwardingCalcTable` is used to calculate the expected output port of an IP
+packet. As the paired ports (the internal port and the uplink) always exist on
+the OVS bridge, and both interfaces are configured with the same MAC address,
+the match condition of an OpenFlow entry in `L2ForwardingCalcTable` uses the
+input port number rather than the MAC address of the packet. The flow actions
+are:
+
+1) set the flag `OutputToOFPortRegMark`,
+2) set the peer port as the `TargetOFPortField`, and
+3) send the packet to `stageIngressSecurity`.
+
+Below are example OpenFlow entries in `L2ForwardingCalcTable`:
+
+```text
+table=L2ForwardingCalc, priority=200,ip,in_port=ens224 actions=load:0x1->NXM_NX_REG0[8],load:0x7->NXM_NX_REG1[],resubmit(,IngressSecurityClassifier)
+table=L2ForwardingCalc, priority=200,ip,in_port="ens224~" actions=load:0x1->NXM_NX_REG0[8],load:0x8->NXM_NX_REG1[],resubmit(,IngressSecurityClassifier)
+```
+
+## Limitations
+
+This feature currently supports only one interface per `ExternalNode` object,
+and `ips` must be set in the interface. The support for multiple network
+interfaces will be added in the future.
+
+The `ExternalNode` name must be unique cluster-wide, even though it is a
+Namespaced resource.
diff --git a/content/docs/v2.2.0-alpha.2/docs/feature-gates.md b/content/docs/v2.2.0-alpha.2/docs/feature-gates.md
new file mode 100644
index 00000000..41da9eaa
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/feature-gates.md
@@ -0,0 +1,533 @@
+# Antrea Feature Gates
+
+This page contains an overview of the various features an administrator can turn on or off for Antrea components. We
+follow the same convention as the
+[Kubernetes feature gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/).
+
+In particular:
+
+* a feature in the Alpha stage will be disabled by default but can be enabled by editing the appropriate `.conf` entry
+ in the Antrea manifest.
+* a feature in the Beta stage will be enabled by default but can be disabled by editing the appropriate `.conf` entry in
+ the Antrea manifest.
+* a feature in the GA stage will be enabled by default and cannot be disabled.
+
+Some features are specific to the Agent, others are specific to the Controller, and some apply to both and should be
+enabled / disabled consistently in both
+`.conf` entries.
+
+To enable / disable a feature, edit the Antrea manifest appropriately. For example, to enable `FeatureGateFoo` on Linux,
+edit the Agent configuration in the
+`antrea` ConfigMap as follows:
+
+```yaml
+ antrea-agent.conf: |
+ # FeatureGates is a map of feature names to bools that enable or disable experimental features.
+ featureGates:
+ # Enable the feature gate.
+ FeatureGateFoo: true
+```
+
+## List of Available Features
+
+| Feature Name | Component | Default | Stage | Alpha Release | Beta Release | GA Release | Extra Requirements | Notes |
+| ----------------------------- | ------------------ | ------- | ----- | ------------- | ------------ | ---------- | ------------------ | --------------------------------------------- |
+| `AntreaProxy` | Agent | `true` | GA | v0.8 | v0.11 | v1.14 | Yes | Must be enabled for Windows. |
+| `EndpointSlice` | Agent | `true` | GA | v0.13.0 | v1.11 | v1.14 | Yes | |
+| `TopologyAwareHints` | Agent | `true` | Beta | v1.8 | v1.12 | N/A | Yes | |
+| `ServiceTrafficDistribution` | Agent | `true` | Beta | N/A | v2.2 | N/A | Yes | |
+| `CleanupStaleUDPSvcConntrack` | Agent | `true` | Beta | v1.13 | v2.1 | N/A | Yes | |
+| `LoadBalancerModeDSR` | Agent | `false` | Alpha | v1.13 | N/A | N/A | Yes | |
+| `AntreaPolicy` | Agent + Controller | `true` | Beta | v0.8 | v1.0 | N/A | No | Agent side config required from v0.9.0+. |
+| `Traceflow` | Agent + Controller | `true` | Beta | v0.8 | v0.11 | N/A | Yes | |
+| `FlowExporter` | Agent | `false` | Alpha | v0.9 | N/A | N/A | Yes | |
+| `NetworkPolicyStats` | Agent + Controller | `true` | Beta | v0.10 | v1.2 | N/A | No | |
+| `NodePortLocal` | Agent | `true` | GA | v0.13 | v1.4 | v1.14 | Yes | Important user-facing change in v1.2.0 |
+| `Egress` | Agent + Controller | `true` | Beta | v1.0 | v1.6 | N/A | Yes | |
+| `NodeIPAM` | Controller | `true` | Beta | v1.4 | v1.12 | N/A | Yes | |
+| `AntreaIPAM` | Agent + Controller | `false` | Alpha | v1.4 | N/A | N/A | Yes | |
+| `Multicast` | Agent + Controller | `true` | Beta | v1.5 | v1.12 | N/A | Yes | |
+| `SecondaryNetwork` | Agent | `false` | Alpha | v1.5 | N/A | N/A | Yes | |
+| `ServiceExternalIP` | Agent + Controller | `false` | Alpha | v1.5 | N/A | N/A | Yes | |
+| `TrafficControl` | Agent | `false` | Alpha | v1.7 | N/A | N/A | No | |
+| `Multicluster` | Agent + Controller | `false` | Alpha | v1.7 | N/A | N/A | Yes | Controller side feature gate added in v1.10.0 |
+| `IPsecCertAuth` | Agent + Controller | `false` | Alpha | v1.7 | N/A | N/A | No | |
+| `ExternalNode` | Agent | `false` | Alpha | v1.8 | N/A | N/A | Yes | |
+| `SupportBundleCollection` | Agent + Controller | `false` | Alpha | v1.10 | N/A | N/A | Yes | |
+| `L7NetworkPolicy` | Agent + Controller | `false` | Alpha | v1.10 | N/A | N/A | Yes | |
+| `AdminNetworkPolicy` | Controller | `false` | Alpha | v1.13 | N/A | N/A | Yes | |
+| `EgressTrafficShaping` | Agent | `false` | Alpha | v1.14 | N/A | N/A | Yes | OVS meters should be supported |
+| `EgressSeparateSubnet` | Agent | `false` | Alpha | v1.15 | N/A | N/A | No | |
+| `NodeNetworkPolicy` | Agent | `false` | Alpha | v1.15 | N/A | N/A | Yes | |
+| `L7FlowExporter` | Agent | `false` | Alpha | v1.15 | N/A | N/A | Yes | |
+| `BGPPolicy` | Agent | `false` | Alpha | v2.1 | N/A | N/A | No | |
+| `NodeLatencyMonitor` | Agent | `false` | Alpha | v2.1 | N/A | N/A | No | |
+
+## Description and Requirements of Features
+
+### AntreaProxy
+
+`AntreaProxy` enables Antrea Proxy which implements Service load-balancing for ClusterIP Services as part of the OVS
+pipeline, as opposed to relying on kube-proxy. By default, this only applies to traffic originating from Pods, and
+destined to ClusterIP Services. However, it can be configured to support all types of Services, replacing kube-proxy
+entirely. Please refer to this [document](antrea-proxy.md) for extra information on Antrea Proxy and how it can be configured.
+
+Note that this feature must be enabled for Windows. The Antrea Windows YAML manifest provided as part of releases
+enables this feature by default. If you edit the manifest, make sure you do not disable it, as it is needed for correct
+NetworkPolicy implementation for Pod-to-Service traffic.
+
+#### Requirements for this Feature
+
+When using the OVS built-in kernel module (which is the most common case), your kernel version must be >= 4.6 (as
+opposed to >= 4.4 without this feature).
+
+### EndpointSlice
+
+`EndpointSlice` enables Service EndpointSlice support in Antrea Proxy. The EndpointSlice API was introduced in
+Kubernetes 1.16 (alpha), enabled by default in Kubernetes 1.17 (beta), and promoted to GA in Kubernetes 1.21. The
+EndpointSlice feature has no effect if Antrea Proxy is not enabled. Refer to this [link](https://kubernetes.io/docs/tasks/administer-cluster/enabling-endpointslices/)
+for more information about EndpointSlice. If this feature is enabled but the EndpointSlice v1 API is not available
+(Kubernetes version lower than 1.21), Antrea Agent will log a message and fall back to the Endpoints API.
+
+#### Requirements for this Feature
+
+- EndpointSlice v1 API is available (Kubernetes version >=1.21).
+- Option `antreaProxy.enable` is set to true.
+
+### TopologyAwareHints
+
+`TopologyAwareHints` enables Topology Aware Routing support in Antrea Proxy. When this feature is enabled, Antrea
+Proxy can route traffic to an Endpoint that is closer to where it originated. Prior to Kubernetes 1.27, this feature
+was known as Topology Aware Hints. Refer to this [link](https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/)
+for more information about Topology Aware Routing.
+
+#### Requirements for this Feature
+
+- Option `antreaProxy.enable` is set to true.
+- EndpointSlice API version v1 is available in Kubernetes.
+
+### ServiceTrafficDistribution
+
+`ServiceTrafficDistribution` enables Traffic Distribution for Services in Antrea Proxy. This feature allows for more
+flexible and intelligent routing decisions by considering both topology and non-topology factors. For more details,
+refer to this [link](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/4444-service-traffic-distribution).
+
+#### Requirements for this Feature
+
+- Option `antreaProxy.enable` is set to true.
+- EndpointSlice API version v1 is available in Kubernetes.
+- Kubernetes must be version 1.30 or higher, with the `ServiceTrafficDistribution` feature gate (a Kubernetes-specific
+ feature gate) enabled.
+
+### LoadBalancerModeDSR
+
+`LoadBalancerModeDSR` allows users to specify the load balancer mode as DSR (Direct Server Return). The load balancer
+mode determines how external traffic destined to LoadBalancerIPs and ExternalIPs of Services is processed when it's load
+balanced across Nodes. In DSR mode, external traffic is never SNAT'd and backend Pods running on Nodes that are not the
+ingress Node can reply to clients directly, bypassing the ingress Node. Therefore, DSR mode can preserve client IP of
+requests, and usually has lower latency and higher throughput. It's only meaningful to use this feature when Antrea Proxy
+is enabled and configured to proxy external traffic (`proxyAll=true`). Refer to this
+[link](antrea-proxy.md#configuring-load-balancer-mode-for-external-traffic) for more information about load balancer mode.
+
+#### Requirements for this Feature
+
+- Options `antreaProxy.enable` and `antreaProxy.proxyAll` are set to true.
+- IPv4 only.
+- Linux Nodes only.
+- Encap mode only.
+
+### CleanupStaleUDPSvcConntrack
+
+`CleanupStaleUDPSvcConntrack` enables support for cleaning up stale UDP Service conntrack connections in Antrea Proxy.
+
+#### Requirements for this Feature
+
+Option `antreaProxy.enable` is set to true.
+
+### AntreaPolicy
+
+`AntreaPolicy` enables Antrea ClusterNetworkPolicy and Antrea NetworkPolicy CRDs to be handled by Antrea
+controller. `ClusterNetworkPolicy` is an Antrea-specific extension to K8s NetworkPolicies, which enables cluster admins
+to define security policies which apply to the entire cluster. `Antrea NetworkPolicy` also complements K8s
+NetworkPolicies by supporting policy priorities and rule actions. Refer to this [document](antrea-network-policy.md) for
+more information.
+
+#### Requirements for this Feature
+
+None
+
+### Traceflow
+
+`Traceflow` enables a CRD API for Antrea that supports generating tracing requests for traffic going through the
+Antrea-managed Pod network. This is useful for troubleshooting connectivity issues, e.g. determining if a NetworkPolicy
+is responsible for traffic drops between two Pods. Refer to this [document](traceflow-guide.md) for more information.
+
+#### Requirements for this Feature
+
+Until Antrea v0.11, this feature could only be used in "encap" mode, with the Geneve tunnel type (default configuration
+for both Linux and Windows). In v0.11, this feature was graduated to Beta (enabled by default) and this requirement was
+lifted.
+
+In order to support cluster Services as the destination for tracing requests, option `antreaProxy.enable` should be set
+to true to enable Antrea Proxy.
+
+### Flow Exporter
+
+`Flow Exporter` is a feature that runs as part of the Antrea Agent, and enables network flow visibility into a
+Kubernetes cluster. The Flow Exporter sends IPFIX flow records, built from connections observed in the Conntrack
+module, to a flow collector. Refer to this [document](network-flow-visibility.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux. Windows support will be added in the future.
+
+### NetworkPolicyStats
+
+`NetworkPolicyStats` enables collecting NetworkPolicy statistics from antrea-agents and exposing them through Antrea
+Stats API, which can be accessed by kubectl get commands, e.g. `kubectl get networkpolicystats`. The statistical data
+includes total number of sessions, packets, and bytes allowed or denied by a NetworkPolicy. It is collected
+asynchronously so there may be a delay of up to 1 minute for changes to be reflected in API responses. The feature
+supports K8s NetworkPolicies and Antrea-native policies, the latter of which requires
+`AntreaPolicy` to be enabled. Usage examples:
+
+```bash
+# List stats of all K8s NetworkPolicies.
+> kubectl get networkpolicystats -A
+NAMESPACE NAME SESSIONS PACKETS BYTES CREATED AT
+default access-nginx 3 36 5199 2020-09-07T13:19:38Z
+kube-system access-dns 1 12 1221 2020-09-07T13:22:42Z
+
+# List stats of all Antrea ClusterNetworkPolicies.
+> kubectl get antreaclusternetworkpolicystats
+NAME SESSIONS PACKETS BYTES CREATED AT
+cluster-deny-egress 3 36 5199 2020-09-07T13:19:38Z
+cluster-access-dns 10 120 12210 2020-09-07T13:22:42Z
+
+# List stats of all Antrea NetworkPolicies.
+> kubectl get antreanetworkpolicystats -A
+NAMESPACE NAME SESSIONS PACKETS BYTES CREATED AT
+default access-http 3 36 5199 2020-09-07T13:19:38Z
+foo bar 1 12 1221 2020-09-07T13:22:42Z
+
+# List per-rule statistics for Antrea ClusterNetworkPolicy cluster-access-dns.
+# Both Antrea NetworkPolicy and Antrea ClusterNetworkPolicy support per-rule statistics.
+> kubectl get antreaclusternetworkpolicystats cluster-access-dns -o json
+{
+ "apiVersion": "stats.antrea.io/v1alpha1",
+ "kind": "AntreaClusterNetworkPolicyStats",
+ "metadata": {
+ "creationTimestamp": "2022-02-24T09:04:53Z",
+ "name": "cluster-access-dns",
+ "uid": "940cf76a-d836-4e76-b773-d275370b9328"
+ },
+ "ruleTrafficStats": [
+ {
+ "name": "rule1",
+ "trafficStats": {
+ "bytes": 392,
+ "packets": 4,
+ "sessions": 1
+ }
+ },
+ {
+ "name": "rule2",
+ "trafficStats": {
+ "bytes": 111,
+ "packets": 2,
+ "sessions": 1
+ }
+ }
+ ],
+ "trafficStats": {
+ "bytes": 503,
+ "packets": 6,
+ "sessions": 2
+ }
+}
+```
+
+#### Requirements for this Feature
+
+None
+
+### NodePortLocal
+
+`NodePortLocal` (NPL) is a feature that runs as part of the Antrea Agent, through which each port of a Service backend
+Pod can be reached from the external network using a port of the Node on which the Pod is running. NPL enables better
+integration with external Load Balancers which can take advantage of the feature: instead of relying on NodePort
+Services implemented by kube-proxy, external Load-Balancers can consume NPL port mappings published by the Antrea
+Agent (as K8s Pod annotations) and load-balance Service traffic directly to backend Pods. Refer to
+this [document](node-port-local.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux with IPv4 addresses. Only TCP & UDP Service ports are
+supported (not SCTP).
+
+### Egress
+
+`Egress` enables a CRD API for Antrea that supports specifying which egress
+(SNAT) IP the traffic from the selected Pods to the external network should use. When a selected Pod accesses the
+external network, the egress traffic will be tunneled to the Node that hosts the egress IP if it's different from the
+Node that the Pod runs on and will be SNATed to the egress IP when leaving that Node. Refer to
+this [document](egress.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux and "encap"
+mode. The support for Windows and other traffic modes will be added in the future.
+
+### NodeIPAM
+
+`NodeIPAM` runs a Node IPAM Controller similar to the one in Kubernetes that allocates Pod CIDRs for Nodes. Running Node
+IPAM Controller with Antrea is useful in environments where Kubernetes Controller Manager does not run the Node IPAM
+Controller, and Antrea has to handle the CIDR allocation.
+
+#### Requirements for this Feature
+
+This feature requires the Node IPAM Controller to be disabled in Kubernetes Controller Manager. When Antrea and
+Kubernetes both run Node IPAM Controller there is a risk of conflicts in CIDR allocation between the two.
+
+### AntreaIPAM
+
+The `AntreaIPAM` feature allocates IP addresses from IPPools. It is required by bridging mode Pods. The bridging
+mode allows flexible control over Pod IP addressing. The desired set of IP ranges, optionally with VLANs, is defined
+with the `IPPool` CRD. An IPPool can be attached, via annotation, to a Namespace, a Pod, or the PodTemplate of a
+StatefulSet/Deployment. Antrea will then manage IP address assignment for the corresponding Pods according to the
+`IPPool` spec. On a Node, cross-Node/VLAN traffic of AntreaIPAM Pods is sent to the underlay network, and
+forwarded/routed by the underlay network. For more information, please refer to the
+[Antrea IPAM document](antrea-ipam.md#antrea-flexible-ipam).
+
+This feature gate also needs to be enabled to use Antrea for IPAM when configuring secondary network interfaces with
+Multus, in which case Antrea works as an IPAM plugin and allocates IP addresses for Pods' secondary networks, again from
+the configured IPPools of a secondary network. Refer to the
+[secondary network IPAM document](antrea-ipam.md#ipam-for-secondary-network) to learn more information.
+
+#### Requirements for this Feature
+
+Both bridging mode and secondary network IPAM are supported only on Linux Nodes.
+
+The bridging mode works only with the `system` OVS datapath type, and the `noEncap`,
+`noSNAT` traffic mode. At the moment, it supports only IPv4. The IPs in an IP range without a VLAN must be in the same
+underlay subnet as the Node IPs, because inter-Node traffic of AntreaIPAM Pods is forwarded by the Node network. IP
+ranges with a VLAN must not overlap with other network subnets, and the underlay network router should provide the
+network connectivity for these VLANs.
+
+### Multicast
+
+The `Multicast` feature enables forwarding multicast traffic within the cluster network (i.e., between Pods) and between
+the external network and the cluster network. Refer to this [document](multicast-guide.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is only supported:
+
+* on Linux Nodes
+* for IPv4 traffic
+* in `noEncap` and `encap` traffic modes
+
+### SecondaryNetwork
+
+The `SecondaryNetwork` feature enables support for provisioning secondary network interfaces for Pods, by annotating
+them appropriately.
+
+More documentation will be coming in the future.
+
+#### Requirements for this Feature
+
+At the moment, Antrea can only create secondary network interfaces using SR-IOV VFs on baremetal Linux Nodes.
+
+### ServiceExternalIP
+
+The `ServiceExternalIP` feature enables a controller which can allocate external IPs for Services with
+type `LoadBalancer`. External IPs are allocated from an
+`ExternalIPPool` resource and each IP gets assigned to a Node selected by the
+`nodeSelector` of the pool automatically. That Node will receive Service traffic destined to that IP and distribute it
+among the backend Endpoints for the Service (through kube-proxy). To enable external IP allocation for a
+`LoadBalancer` Service, you need to annotate the Service with
+`"service.antrea.io/external-ip-pool": "<pool-name>"` and define the appropriate `ExternalIPPool` resource.
+Refer to this [document](service-loadbalancer.md) for more information.
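+
+A minimal sketch of the two resources involved (the `crd.antrea.io/v1beta1` API version, names, IPs,
+and labels are assumptions; see the linked document for the authoritative API):
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+  name: service-external-ip-pool
+spec:
+  ipRanges:
+  - start: 10.10.0.2
+    end: 10.10.0.10
+  nodeSelector:
+    matchLabels:
+      network-role: ingress
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-svc
+  annotations:
+    service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+  type: LoadBalancer
+  selector:
+    app: my-app
+  ports:
+  - port: 80
+    protocol: TCP
+EOF
+```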
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux.
+
+### TrafficControl
+
+`TrafficControl` enables a CRD API for Antrea that controls and manipulates the transmission of Pod traffic. It allows
+users to mirror or redirect traffic originating from specific Pods or destined for specific Pods to a local network
+device or a remote destination via a tunnel of various types. It enables a monitoring solution to get full visibility
+into network traffic, including both north-south and east-west traffic. Refer to this [document](traffic-control.md)
+for more information.
+
+### Multicluster
+
+The `Multicluster` feature gate of Antrea Agent enables [Antrea Multi-cluster Gateways](multicluster/user-guide.md#multi-cluster-gateway-configuration)
+which route Multi-cluster Service and Pod traffic through tunnels across clusters, and support for
+[Multi-cluster NetworkPolicy ingress rules](multicluster/user-guide.md#ingress-rule).
+The `Multicluster` feature gate of Antrea Controller enables support for [Multi-cluster NetworkPolicy](multicluster/user-guide.md#multi-cluster-networkpolicy).
+
+#### Requirements for this Feature
+
+Antrea Multi-cluster Controller must be deployed and the cluster must join a Multi-cluster ClusterSet to configure
+Antrea Multi-cluster features. Refer to [Antrea Multi-cluster user guide](multicluster/user-guide.md) for more
+information about Multi-cluster configuration. At the moment, Antrea Multi-cluster supports only IPv4.
+
+### IPsecCertAuth
+
+This feature enables certificate-based authentication for IPsec tunnels.
+
+### ExternalNode
+
+The `ExternalNode` feature enables the Antrea Agent to run on a virtual machine or a bare-metal server that is not a
+Kubernetes Node, and to enforce Antrea NetworkPolicies for the VM/BM. The Antrea Agent supports the `ExternalNode`
+feature on both Linux and Windows.
+
+Refer to this [document](external-node.md) for more information.
+
+#### Requirements for this Feature
+
+Since Antrea Agent is running on an unmanaged VM/BM when this feature is enabled, features designed for K8s Pods are
+disabled. As of now, this feature requires that `AntreaPolicy` and `NetworkPolicyStats` are also enabled.
+
+OVS is required to be installed on the virtual machine or the bare-metal server before running Antrea Agent, and the OVS
+version must be >= 2.13.0.
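+
+You can verify the installed OVS version on the VM/BM before starting the Antrea Agent, for example:
+
+```bash
+# Prints the Open vSwitch version; it must be >= 2.13.0.
+ovs-vsctl --version
+```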
+
+### SupportBundleCollection
+
+The `SupportBundleCollection` feature enables a CRD API for Antrea to collect support bundle files on any Node or
+ExternalNode, and upload them to a user-defined file server.
+
+More documentation will be coming in the future.
+
+#### Requirements for this Feature
+
+Users should provide a file server when using this feature, and store its authentication credentials in a Secret. The
+Antrea Controller must be granted permission to read the Secret.
+
+### L7NetworkPolicy
+
+`L7NetworkPolicy` enables users to protect their applications by specifying how they are allowed to communicate with
+others, taking into account application context, providing fine-grained control over the network traffic beyond IP,
+transport protocol, and port. Refer to this [document](antrea-l7-network-policy.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux, and TX checksum offloading must be disabled. Refer to
+this [document](antrea-l7-network-policy.md#prerequisites) for more information and how it can be configured.
+
+### AdminNetworkPolicy
+
+The `AdminNetworkPolicy` API (which currently includes the AdminNetworkPolicy and BaselineAdminNetworkPolicy objects)
+complements the Antrea-native policies and helps cluster administrators set security postures in a portable manner.
+
+### NodeNetworkPolicy
+
+`NodeNetworkPolicy` allows users to apply ClusterNetworkPolicy to Kubernetes Nodes.
+
+#### Requirements for this Feature
+
+This feature is only supported for Linux Nodes at the moment.
+
+### EgressTrafficShaping
+
+The `EgressTrafficShaping` feature gate of the Antrea Agent enables traffic shaping for Egress, which can limit the
+bandwidth of all egress traffic belonging to an Egress. Refer to this [document](egress.md#trafficshaping) for more information.
+
+#### Requirements for this Feature
+
+This feature leverages OVS meters for the actual rate limiting, and therefore requires OVS meters
+to be supported in the datapath.
+
+### EgressSeparateSubnet
+
+`EgressSeparateSubnet` allows users to allocate Egress IPs from a subnet different from the default Node subnet.
+Refer to this [document](egress.md#subnetinfo) for more information.
+
+### L7FlowExporter
+
+`L7FlowExporter` enables users to export application-layer flow data using Pod or Namespace annotations.
+Refer to this [document](network-flow-visibility.md#l7-visibility) for more information.
+
+#### Requirements for this Feature
+
+- Linux Nodes only.
+
+### BGPPolicy
+
+`BGPPolicy` allows users to initiate BGP process on selected Kubernetes Nodes and advertise Service IPs (e.g.,
+ClusterIPs, ExternalIPs, LoadBalancerIPs), Pod IPs and Egress IPs to remote BGP peers, providing a flexible mechanism
+for integrating Kubernetes clusters with external BGP-enabled networks.
+
+#### Requirements for this Feature
+
+- Linux Nodes only.
+
+### NodeLatencyMonitor
+
+`NodeLatencyMonitor` enables latency measurements between all pairs of Nodes using ICMP probes,
+which are generated periodically by each Antrea Agent. After enabling the feature gate, you will
+need to create a `NodeLatencyMonitor` Custom Resource named `default`, after which probes will start
+being generated. For example, you can apply the following YAML manifest using kubectl:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha1
+kind: NodeLatencyMonitor
+metadata:
+ name: default
+spec:
+ pingIntervalSeconds: 60
+```
+
+You can adjust `pingIntervalSeconds` to any positive value that suits your needs. To stop latency
+measurements, simply delete the Custom Resource with `kubectl delete nodelatencymonitor/default`.
+
+Latency measurements can be queried using the `NodeLatencyStats` API in `stats.antrea.io/v1alpha1`.
+This can be done with kubectl:
+
+```bash
+> kubectl get nodelatencystats
+NODE NAME NUM LATENCY ENTRIES AVG LATENCY MAX LATENCY
+kind-control-plane 2 7.110553ms 8.94447ms
+kind-worker 2 11.177585ms 11.508751ms
+kind-worker2 2 11.356675ms 15.265629ms
+```
+
+Note that it may take up to one period interval (`pingIntervalSeconds`) for results to become
+visible. Use `kubectl get nodelatencystats -o yaml` or `kubectl get nodelatencystats -o json` to see all the
+individual latency measurements. For example:
+
+```bash
+> kubectl get nodelatencystats/kind-worker -o yaml
+```
+
+```yaml
+apiVersion: stats.antrea.io/v1alpha1
+kind: NodeLatencyStats
+metadata:
+ creationTimestamp: null
+ name: kind-worker
+peerNodeLatencyStats:
+- nodeName: kind-control-plane
+ targetIPLatencyStats:
+ - lastMeasuredRTTNanoseconds: 5837000
+ lastRecvTime: "2024-07-26T22:40:03Z"
+ lastSendTime: "2024-07-26T22:40:33Z"
+ targetIP: 10.10.0.1
+- nodeName: kind-worker2
+ targetIPLatencyStats:
+ - lastMeasuredRTTNanoseconds: 4704000
+ lastRecvTime: "2024-07-26T22:40:03Z"
+ lastSendTime: "2024-07-26T22:40:33Z"
+ targetIP: 10.10.2.1
+```
+
+The feature supports both IPv4 and IPv6. When enabled in a dual-stack cluster, Antrea Agents will
+generate both ICMP and ICMPv6 probes, and report both latency results. In general (except when
+`networkPolicyOnly` mode is used), inter-Node latency will be measured between Antrea gateway
+interfaces. Therefore, in `encap` mode, ICMP probes will traverse the overlay, just like regular
+inter-Node Pod traffic. We believe this gives an accurate representation of the east-west latency
+experienced by Pod traffic.
+
+#### Requirements for this Feature
+
+- Linux Nodes only - the feature has not been tested on Windows Nodes yet.
diff --git a/content/docs/v2.2.0-alpha.2/docs/getting-started.md b/content/docs/v2.2.0-alpha.2/docs/getting-started.md
new file mode 100644
index 00000000..b1a4cc02
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/getting-started.md
@@ -0,0 +1,269 @@
+# Getting Started
+
+Antrea is super easy to install. All the Antrea components are
+containerized and can be installed using the Kubernetes deployment
+manifest.
+
+![antrea-demo](https://user-images.githubusercontent.com/2495809/94325574-e7876500-ff53-11ea-9ecd-6dedef339fac.gif)
+
+## Ensuring requirements are satisfied
+
+### NodeIPAM
+
+Antrea relies on `NodeIPAM` for per-Node CIDR allocation. `NodeIPAM` can run
+within the Kubernetes `kube-controller-manager`, or within the Antrea
+Controller.
+
+#### NodeIPAM within kube-controller-manager
+
+When using `kubeadm` to create the Kubernetes cluster, passing
+`--pod-network-cidr=<CIDR>` to `kubeadm init` will enable
+`NodeIpamController`. Clusters created with kubeadm will always have
+`CNI` plugins enabled. Refer to
+[Creating a cluster with kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm)
+for more information about setting up a Kubernetes cluster with `kubeadm`.
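+
+For example, a minimal sketch of a `kubeadm init` invocation that enables `NodeIpamController`
+(10.244.0.0/16 is just an illustrative Pod CIDR):
+
+```bash
+kubeadm init --pod-network-cidr=10.244.0.0/16
+```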
+
+When the cluster is deployed by other means, then:
+
+* To enable `NodeIpamController`, `kube-controller-manager` should be started
+with the following flags:
+  - `--cluster-cidr=<CIDR>`
+ - `--allocate-node-cidrs=true`
+
+* To enable `CNI` network plugins, `kubelet` should be started with the
+`--network-plugin=cni` flag.
+
+* To enable masquerading of traffic for Service cluster IP via iptables,
+`kube-proxy` should be started with the `--cluster-cidr=<CIDR>`
+flag.
+
+#### NodeIPAM within Antrea Controller
+
+For further info about running NodeIPAM within Antrea Controller, see
+[Antrea IPAM Capabilities](antrea-ipam.md)
+
+### Open vSwitch
+
+As for OVS, when using the kernel module built into the Linux kernel, kernel version >= 4.6 is
+required. When building the kernel module from OVS sources, OVS
+version >= 2.6.0 is required.
+
+Red Hat Enterprise Linux and CentOS 7.x use kernel 3.10, but as changes to
+OVS kernel modules are regularly backported to these kernel versions, they
+should work with Antrea, starting with version 7.4.
+
+In case a node does not have a supported OVS module installed,
+you can install it following the instructions at:
+[Installing Open vSwitch](https://docs.openvswitch.org/en/latest/intro/install/).
+Please be aware that the `vport-stt` module is not in the Linux tree and needs to be
+built from source. Please build and load it manually before enabling STT tunneling.
+
+Some experimental features that are disabled by default may have additional requirements.
+Please refer to the [Feature Gates documentation](feature-gates.md) to determine
+whether they apply to you.
+
+Antrea will work out-of-the-box on most popular Operating Systems. Known issues
+encountered when running Antrea on specific OSes are documented
+[here](os-issues.md).
+
+There are also a few network prerequisites which need to be satisfied; they depend
+on the tunnel mode you choose, so please check the [network requirements](./network-requirements.md).
+
+## Installation / Upgrade
+
+To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases). For any
+given release `<TAG>` (e.g. `v0.1.0`), you can deploy Antrea as follows:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea.yml):
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+You can use the same `kubectl apply` command to upgrade to a more recent version
+of Antrea.
+
+Antrea supports some experimental features that can be enabled or disabled,
+please refer to the [Feature Gates documentation](feature-gates.md) for more
+information.
+
+### Windows support
+
+If you want to add Windows Nodes to your cluster, please refer to these
+[installation instructions](windows.md).
+
+### ARM support
+
+Starting with v1.0, Antrea supports arm64 and arm/v7 Nodes. The installation
+instructions do not change when some (or all) Linux Nodes in a cluster use an
+ARM architecture: the same deployment YAML can be used, as the
+`antrea/antrea-agent-ubuntu` and `antrea/antrea-controller-ubuntu` Docker images
+are actually manifest lists with support for the amd64, arm64 and arm/v7
+architectures.
+
+Note that while we do run a subset of the Kubernetes conformance tests on both
+the arm/v7 and arm64 Docker images (using [k3s](https://k3s.io/) as the
+Kubernetes distribution), our testing is not as thorough as for the amd64
+image. However, we do not anticipate any issue.
+
+### Install with Helm
+
+Starting with v1.8, Antrea can be installed and updated with Helm. Please refer
+to these [installation instructions](helm.md).
+
+### Deploying Antrea on a Cluster with Existing CNI
+
+The instructions above only apply when deploying Antrea in a new cluster. If you
+need to migrate your existing cluster from another CNI plugin to Antrea, you
+will need to do the following:
+
+* Delete the previous CNI, including all resources (K8s objects, iptables rules,
+interfaces, ...) created by that CNI.
+* Deploy Antrea.
+Restart all Pods in the CNI network in order for Antrea to set up networking
+for them. This does not apply to Pods which use the Node's network namespace
+(i.e. Pods configured with `hostNetwork: true`). You may use `kubectl drain` to
+drain each Node or reboot all your Nodes.
+
+While this is in progress, networking will be disrupted in your cluster. After
+deleting the previous CNI, existing Pods may not be reachable anymore.
+
+For example, when migrating from Flannel to Antrea, you will need to do the
+following:
+
+1. Delete Flannel with `kubectl delete -f <path-to-flannel-manifest>`.
+2. Delete the Flannel bridge and tunnel interface with `ip link delete flannel.1 &&
+ip link delete cni0` **on each Node**.
+3. Ensure [requirements](#ensuring-requirements-are-satisfied) are satisfied.
+4. [Deploy Antrea](#installation--upgrade).
+5. Drain and uncordon Nodes one-by-one. For each Node, run `kubectl drain <node-name>
+--ignore-daemonsets && kubectl uncordon <node-name>`. The
+`--ignore-daemonsets` flag will ignore DaemonSet-managed Pods, including the
+Antrea Agent Pods. If you have any other DaemonSet-managed Pods (besides the
+Antrea ones and system ones such as kube-proxy), they will be ignored and will
+not be drained from the Node. Refer to the [Kubernetes
+documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/)
+for more information. Alternatively, you can also restart all the Pods yourself,
+or simply reboot your Nodes.
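+
+If you prefer to script step 5 above, a rough sketch (draining one Node at a time):
+
+```bash
+for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
+  kubectl drain "$node" --ignore-daemonsets
+  kubectl uncordon "$node"
+done
+```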
+
+To build the image locally, you can follow the instructions in the [Contributor
+Guide](../CONTRIBUTING.md#building-and-testing-your-change).
+
+### Deploying Antrea in Kind
+
+To deploy Antrea in a [Kind](https://github.com/kubernetes-sigs/kind) cluster,
+please refer to this [guide](kind.md).
+
+### Deploying Antrea in Minikube
+
+To deploy Antrea in a [Minikube](https://github.com/kubernetes/minikube) cluster,
+please refer to this [guide](minikube.md).
+
+### Deploying Antrea in Rancher Managed Cluster
+
+To deploy Antrea in a [Rancher](https://github.com/rancher/rancher) managed cluster,
+please refer to this [guide](kubernetes-installers.md#rancher).
+
+### Deploying Antrea in AKS, EKS, and GKE
+
+Antrea can work with cloud managed Kubernetes services, and can be deployed to
+AKS, EKS, and GKE clusters.
+
+* To deploy Antrea to an AKS or an AKS Engine cluster, please refer to [the AKS installation guide](aks-installation.md).
+* To deploy Antrea to an EKS cluster, please refer to [the EKS installation guide](eks-installation.md).
+* To deploy Antrea to a GKE cluster, please refer to [the GKE installation guide](gke-installation.md).
+
+### Deploying Antrea with Custom Certificates
+
+By default, Antrea generates the certificates needed for itself to run. To
+provide your own certificates, please refer to [Securing Control Plane](securing-control-plane.md).
+
+### Antctl: Installation and Usage
+
+To use antctl, the Antrea command-line tool, please refer to [this guide](antctl.md).
+
+## Features
+
+### Antrea Network Policy
+
+Besides Kubernetes NetworkPolicy, Antrea also implements its own Network Policy
+CRDs, which provide advanced features including: policy priority, tiering, deny
+action, external entity, and policy statistics. For more information on usage of
+Antrea Network Policies, refer to the [Antrea Network Policy document](antrea-network-policy.md).
+
+### Egress
+
+Antrea supports specifying which egress (SNAT) IP the traffic from the selected
+Pods to the external network should use and which Node the traffic should leave
+the cluster from. For more information, refer to the [Egress document](egress.md).
+
+### Network Flow Visibility
+
+Antrea supports exporting network flow information using IPFIX, and provides a
+reference cookbook on how to visualize the exported network flows using Elastic
+Stack and Kibana dashboards. For more information, refer to the [network flow
+visibility document](network-flow-visibility.md).
+
+### NoEncap and Hybrid Traffic Modes
+
+Besides the default `Encap` mode, in which Pod traffic across Nodes will be
+encapsulated and sent over tunnels, Antrea also supports `NoEncap` and `Hybrid`
+traffic modes. In `NoEncap` mode, Antrea does not encapsulate Pod traffic, but
+relies on the Node network to route the traffic across Nodes. In `Hybrid` mode,
+Antrea encapsulates Pod traffic when the source Node and the destination Node
+are in different subnets, but does not encapsulate when the source and the
+destination Nodes are in the same subnet. Refer to [this guide](noencap-hybrid-modes.md)
+to learn how to configure Antrea with `NoEncap` or `Hybrid` mode.
+
+### Antrea Web UI
+
+Antrea comes with a web UI, which can show runtime information of Antrea
+components and perform Antrea Traceflow operations. Please refer to the [Antrea
+UI repository](https://github.com/antrea-io/antrea-ui) for installation
+instructions and more information.
+
+### OVS Hardware Offload
+
+Antrea can offload OVS flow processing to the NICs that support OVS kernel
+hardware offload using TC. The hardware offload can improve OVS performance
+significantly. For more information on how to configure OVS offload, refer to
+the [OVS hardware offload guide](ovs-offload.md).
+
+### Prometheus Metrics
+
+Antrea supports exporting metrics to Prometheus. For more information, refer to
+the [Prometheus integration document](prometheus-integration.md).
+
+### Support for Services of type LoadBalancer
+
+By leveraging Antrea's Service external IP management feature or configuring
+MetalLB to work with Antrea, Services of type LoadBalancer can be supported
+without requiring an external LoadBalancer. For more information, please
+refer to the [Service LoadBalancer document](service-loadbalancer.md).
+
+### Traceflow
+
+Traceflow is a very useful network diagnosis feature in Antrea. It can trace
+and report the forwarding path of a specified packet in the Antrea network.
+For usage of this feature, refer to the [Traceflow user guide](traceflow-guide.md).
+
+### Traffic Encryption
+
+Antrea supports encrypting traffic between Linux Nodes using IPsec or WireGuard.
+To deploy Antrea with traffic encryption enabled, please refer to [this guide](traffic-encryption.md).
+
+### Antrea Multi-cluster
+
+Antrea Multi-cluster implements Multi-cluster Service API, which allows users to
+create multi-cluster Services that can be accessed cross clusters in a
+ClusterSet. Antrea Multi-cluster also supports Antrea ClusterNetworkPolicy
+replication. Multi-cluster admins can define ClusterNetworkPolicies to be
+replicated across the entire ClusterSet, and enforced in all member clusters.
+For more information about Antrea Multi-cluster, please refer to the
+[Antrea Multi-cluster user guide](multicluster/user-guide.md).
diff --git a/content/docs/v2.2.0-alpha.2/docs/gke-installation.md b/content/docs/v2.2.0-alpha.2/docs/gke-installation.md
new file mode 100644
index 00000000..ccfc268a
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/gke-installation.md
@@ -0,0 +1,133 @@
+# Deploying Antrea on a GKE cluster
+
+We support running Antrea inside of GKE clusters on Ubuntu Nodes. Antrea operates
+in NetworkPolicy-only mode, in which no encapsulation is required for any kind of traffic
+(intra-Node, inter-Node, etc.) and NetworkPolicies are enforced using OVS. Antrea is supported
+with VPC-native mode either enabled or disabled.
+
+## GKE Prerequisites
+
+1. Install the Google Cloud SDK (gcloud). Refer to [Google Cloud SDK installation guide](https://cloud.google.com/sdk/install)
+
+ ```bash
+ curl https://sdk.cloud.google.com | bash
+ ```
+
+2. Make sure you are authenticated to use the Google Cloud API
+
+ ```bash
+ export ADMIN_USER=user@email.com
+ gcloud auth login
+ ```
+
+3. Create a project or use an existing one
+
+ ```bash
+ export GKE_PROJECT=gke-clusters
+ gcloud projects create $GKE_PROJECT
+ ```
+
+## Creating the cluster
+
+You can use any method to create a GKE cluster (gcloud SDK, gcloud Console, etc). The example
+given here is using the Google Cloud SDK.
+
+**Note:** Antrea is supported on Ubuntu Nodes only for GKE clusters. When creating the cluster, you
+ must use the default network provider and must *not* enable "Dataplane V2".
+
+1. Create a GKE cluster
+
+ ```bash
+ export GKE_ZONE="us-west1"
+ export GKE_HOST="UBUNTU"
+ gcloud container --project $GKE_PROJECT clusters create cluster1 --image-type $GKE_HOST \
+ --zone $GKE_ZONE --enable-ip-alias
+ ```
+
+2. Access your cluster
+
+ ```bash
+ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+    gke-cluster1-default-pool-93d7da1c-61z4  Ready    <none>   3m11s  1.25.7-gke.1000
+    gke-cluster1-default-pool-93d7da1c-rkbm  Ready    <none>   3m9s   1.25.7-gke.1000
+ ```
+
+3. Create a cluster-admin ClusterRoleBinding
+
+ ```bash
+ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user user@email.com
+ ```
+
+ **Note:** To create clusterRoleBinding, the user must have `container.clusterRoleBindings.create` permission.
+If the previous command fails due to a permission error, use the following command to grant the required
+permission. Only a cluster Admin can assign this permission.
+
+ ```bash
+ gcloud projects add-iam-policy-binding $GKE_PROJECT --member user:user@email.com --role roles/container.admin
+ ```
+
+## Deploying Antrea
+
+1. Prepare the Cluster Nodes
+
+ Deploy ``antrea-node-init`` DaemonSet to enable ``kubelet`` to operate in CNI mode.
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-gke-node-init.yml
+ ```
+
+2. Deploy Antrea
+
+ To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases).
+Note that GKE support was added in release 0.5.0, which means you cannot
+pick a release older than 0.5.0. For any given release `<TAG>` (e.g. `v0.5.0`),
+you can deploy Antrea as follows:
+
+ ```bash
+    kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-gke.yml
+ ```
+
+ To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea-gke.yml):
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-gke.yml
+ ```
+
+    The command will deploy a single replica of the Antrea Controller to the GKE
+cluster, and the Antrea Agent to every Node. After a successful deployment,
+you should be able to see these Pods running in your cluster:
+
+ ```bash
+ $ kubectl get pods --namespace kube-system -l app=antrea -o wide
+ NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+    antrea-agent-24vwr                  2/2     Running   0          46s   10.138.15.209   gke-cluster1-default-pool-93d7da1c-rkbm   <none>           <none>
+    antrea-agent-7dlcp                  2/2     Running   0          46s   10.138.15.206   gke-cluster1-default-pool-9ba12cea-wjzn   <none>           <none>
+    antrea-controller-5f9985c59-5crt6   1/1     Running   0          46s   10.138.15.209   gke-cluster1-default-pool-93d7da1c-rkbm   <none>           <none>
+ ```
+
+3. Restart remaining Pods
+
+ Once Antrea is up and running, restart all Pods in all Namespaces (kube-system, gmp-system, etc) so they can be managed by Antrea.
+
+ ```bash
+    $ for ns in $(kubectl get ns -o=jsonpath='{.items[*].metadata.name}' --no-headers=true); do \
+    pods=$(kubectl get pods -n $ns -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{ print $1 }'); \
+ [ -z "$pods" ] || kubectl delete pods -n $ns $pods; done
+ pod "alertmanager-0" deleted
+ pod "collector-4sfvd" deleted
+ pod "collector-gtlxf" deleted
+ pod "gmp-operator-67c4678f5c-ffktp" deleted
+ pod "rule-evaluator-85b8bb96dc-trnqj" deleted
+ pod "event-exporter-gke-7bf6c99dcb-4r62c" deleted
+ pod "konnectivity-agent-autoscaler-6dfdb49cf7-hfv9g" deleted
+ pod "konnectivity-agent-cc655669b-2cjc9" deleted
+ pod "konnectivity-agent-cc655669b-d79vf" deleted
+ pod "kube-dns-5bfd847c64-ksllw" deleted
+ pod "kube-dns-5bfd847c64-qv9tq" deleted
+ pod "kube-dns-autoscaler-84b8db4dc7-2pb2b" deleted
+ pod "l7-default-backend-64679d9c86-q69lm" deleted
+ pod "metrics-server-v0.5.2-6bf74b5d5f-22gqq" deleted
+ ```
diff --git a/content/docs/v2.2.0-alpha.2/docs/helm.md b/content/docs/v2.2.0-alpha.2/docs/helm.md
new file mode 100644
index 00000000..f831bf4c
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/helm.md
@@ -0,0 +1,127 @@
+# Installing Antrea with Helm
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [Charts](#charts)
+ - [Antrea chart](#antrea-chart)
+ - [Installation](#installation)
+ - [Upgrade](#upgrade)
+ - [An important note on CRDs](#an-important-note-on-crds)
+ - [Flow Aggregator chart](#flow-aggregator-chart)
+ - [Installation](#installation-1)
+ - [Upgrade](#upgrade-1)
+ - [Theia chart](#theia-chart)
+
+
+Starting with Antrea v1.8, Antrea can be installed and updated using
+[Helm](https://helm.sh/).
+
+We provide the following Helm charts:
+
+* `antrea/antrea`: the Antrea network plugin.
+* `antrea/flow-aggregator`: the Antrea Flow Aggregator; see
+ [here](network-flow-visibility.md) for more details.
+* `antrea/theia`: Theia, the Antrea network observability solution; refer to the
+ [Theia](https://github.com/antrea-io/theia) sub-project for more details.
+
+Note that these charts are the same charts that we use to generate the YAML
+manifests for the `kubectl apply` installation method.
+
+## Prerequisites
+
+* Ensure that the necessary
+ [requirements](getting-started.md#ensuring-requirements-are-satisfied) for
+ running Antrea are met.
+* Ensure that Helm 3 is [installed](https://helm.sh/docs/intro/install/). We
+ recommend using a recent version of Helm if possible. Refer to the [Helm
+ documentation](https://helm.sh/docs/topics/version_skew/) for compatibility
+ between Helm and Kubernetes versions.
+* Add the Antrea Helm chart repository:
+
+ ```bash
+ helm repo add antrea https://charts.antrea.io
+ helm repo update
+ ```
+
+## Charts
+
+### Antrea chart
+
+#### Installation
+
+To install the Antrea Helm chart, use the following command:
+
+```bash
+helm install antrea antrea/antrea --namespace kube-system
+```
+
+This will install the latest available version of Antrea. You can also install a
+specific version of Antrea (>= v1.8.0) with `--version <version>`.
+
+#### Upgrade
+
+To upgrade the Antrea Helm chart, use the following commands:
+
+```bash
+# Upgrading CRDs requires an extra step; see explanation below
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-crds.yml
+helm upgrade antrea antrea/antrea --namespace kube-system --version <version>
+```
+
+#### An important note on CRDs
+
+Helm 3 introduces "special treatment" for
+[CRDs](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/),
+with the ability to place CRD definitions (as plain YAML, not templated) in a
+special crds/ directory. When CRDs are defined this way, they will be installed
+before other resources (in case these other resources include CRs corresponding
+to these CRDs). CRDs defined this way will also never be deleted (to avoid
+accidental deletion of user-defined CRs) and will also never be upgraded (in
+case the chart author didn't ensure that the upgrade was
+backwards-compatible). The rationale for all of this is described in detail in
+this [Helm community
+document](https://github.com/helm/community/blob/main/hips/hip-0011.md).
+
+Even though Antrea follows a [strict versioning policy](versioning.md), which
+reduces the likelihood of a serious issue when upgrading Antrea, we have decided
+to follow Helm best practices when it comes to CRDs. It means that an extra step
+is required for upgrading the chart:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-crds.yml
+```
+
+When upgrading CRDs in production, it is recommended to make a backup of your
+Custom Resources (CRs) first.
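+
+One possible way to do this is sketched below, under the assumption that all Antrea CRD names
+contain `antrea.io`:
+
+```bash
+# Export all CRs of every Antrea CRD to local YAML files before upgrading.
+for crd in $(kubectl get crds -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep 'antrea.io'); do
+  kubectl get "$crd" -A -o yaml > "backup-${crd}.yaml"
+done
+```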
+
+### Flow Aggregator chart
+
+The Flow Aggregator is on the same release schedule as Antrea. Please ensure
+that you use the same released version for the Flow Aggregator chart as for the
+Antrea chart.
+
+#### Installation
+
+To install the Flow Aggregator Helm chart, use the following command:
+
+```bash
+helm install flow-aggregator antrea/flow-aggregator --namespace flow-aggregator --create-namespace
+```
+
+This will install the latest available version of the Flow Aggregator. You can
+also install a specific version (>= v1.8.0) with `--version <version>`.
+
+#### Upgrade
+
+To upgrade the Flow Aggregator Helm chart, use the following command:
+
+```bash
+helm upgrade flow-aggregator antrea/flow-aggregator --namespace flow-aggregator --version <version>
+```
+
+### Theia chart
+
+Refer to the [Theia
+documentation](https://github.com/antrea-io/theia/blob/main/docs/getting-started.md).
diff --git a/content/docs/v2.2.0-alpha.2/docs/kind.md b/content/docs/v2.2.0-alpha.2/docs/kind.md
new file mode 100644
index 00000000..aed38803
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/kind.md
@@ -0,0 +1,187 @@
+# Deploying Antrea on a Kind cluster
+
+
+- [Create a Kind cluster and deploy Antrea in a few seconds](#create-a-kind-cluster-and-deploy-antrea-in-a-few-seconds)
+ - [Using the kind-setup.sh script](#using-the-kind-setupsh-script)
+ - [As an Antrea developer](#as-an-antrea-developer)
+ - [Create a Kind cluster manually](#create-a-kind-cluster-manually)
+ - [Deploy Antrea to your Kind cluster](#deploy-antrea-to-your-kind-cluster)
+ - [Deploy a local build of Antrea to your Kind cluster (for developers)](#deploy-a-local-build-of-antrea-to-your-kind-cluster-for-developers)
+ - [Check that everything is working](#check-that-everything-is-working)
+- [Run the Antrea e2e tests](#run-the-antrea-e2e-tests)
+- [FAQ](#faq)
+ - [Antrea Agents are not starting on macOS, what could it be?](#antrea-agents-are-not-starting-on-macos-what-could-it-be)
+ - [Antrea Agents are not starting on Windows, what could it be?](#antrea-agents-are-not-starting-on-windows-what-could-it-be)
+
+
+We support running Antrea inside of Kind clusters on both Linux and macOS
+hosts.
+
+To deploy a released version of Antrea on an existing Kind cluster, you can
+simply use the same command as for other types of clusters:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+## Create a Kind cluster and deploy Antrea in a few seconds
+
+### Using the kind-setup.sh script
+
+To create a simple two worker Node cluster and deploy a released version of
+Antrea, use:
+
+```bash
+./ci/kind/kind-setup.sh create
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+Or, for the latest version of Antrea, use:
+
+```bash
+./ci/kind/kind-setup.sh create
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+The `kind-setup.sh` script may execute `kubectl` commands to set up the cluster,
+and requires that `kubectl` be present in your `PATH`.
+
+To specify a different number of worker Nodes, use `--num-workers <num>`. To
+specify the IP family of the kind cluster, use `--ip-family <ipv4|ipv6|dual>`.
+To specify the Kubernetes version of the kind cluster, use
+`--k8s-version <version>`. To specify the Service Cluster IP range, use
+`--service-cidr <cidr>`.
+
+If you want to pre-load the Antrea image in each Node (to avoid having each Node
+pull from the registry), you can use:
+
+```bash
+tag=<TAG>
+cluster=<CLUSTER_NAME>
+docker pull antrea/antrea-controller-ubuntu:$tag
+docker pull antrea/antrea-agent-ubuntu:$tag
+./ci/kind/kind-setup.sh \
+ --images "antrea/antrea-controller-ubuntu:$tag antrea/antrea-agent-ubuntu:$tag" \
+ create $cluster
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$tag/antrea.yml
+```
+
+`kind-setup.sh` is a convenience script, typically used by developers for
+testing. For more information on how to create a Kind cluster manually and
+deploy Antrea, read the following sections.
+
+#### As an Antrea developer
+
+If you are an Antrea developer and you need to deploy Antrea with your local
+changes and locally built Antrea image, use:
+
+```bash
+./ci/kind/kind-setup.sh --antrea-cni create
+```
+
+`kind-setup.sh` allows developers to specify the number of worker Nodes, the
+docker bridge networks/subnets connected to the worker Nodes (to test Antrea in
+different encap modes), and a list of docker images to be pre-loaded in each
+Node. For more information on usage, run:
+
+```bash
+./ci/kind/kind-setup.sh help
+```
+
+As a developer, you will usually want to provide the `--antrea-cni` flag, so that
+`kind-setup.sh` can generate the appropriate Antrea YAML manifest for you on
+the fly and apply it directly to the created cluster.
+
+### Create a Kind cluster manually
+
+The only requirement is to use a Kind configuration file which disables the
+Kubernetes default CNI (`kubenet`). For example, your configuration file may
+look like this:
+
+```yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+networking:
+ disableDefaultCNI: true
+ podSubnet: 10.10.0.0/16
+nodes:
+- role: control-plane
+- role: worker
+- role: worker
+```
+
+Once you have created your configuration file (let's call it `kind-config.yml`),
+create your cluster with:
+
+```bash
+kind create cluster --config kind-config.yml
+```
+
+### Deploy Antrea to your Kind cluster
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+### Deploy a local build of Antrea to your Kind cluster (for developers)
+
+These instructions assume that you have built the Antrea Docker image locally
+(e.g. by running `make` from the root of the repository).
+
+```bash
+# load the Antrea Docker images in the Nodes
+kind load docker-image antrea/antrea-controller-ubuntu:latest antrea/antrea-agent-ubuntu:latest
+# deploy Antrea
+kubectl apply -f build/yamls/antrea.yml
+```
+
+### Check that everything is working
+
+After a few seconds you should be able to observe the following when running
+`kubectl get -n kube-system pods -l app=antrea`:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+antrea-agent-dgsfs 2/2 Running 0 8m56s
+antrea-agent-nzsmx 2/2 Running 0 8m56s
+antrea-agent-zsztq 2/2 Running 0 8m56s
+antrea-controller-775f4d79f8-6tksp 1/1 Running 0 8m56s
+```
+
+## Run the Antrea e2e tests
+
+To run the Antrea e2e test suite on your Kind cluster, please refer to [this
+document](https://github.com/antrea-io/antrea/blob/main/test/e2e/README.md#running-the-e2e-tests-on-a-kind-cluster).
+
+## FAQ
+
+### Antrea Agents are not starting on macOS, what could it be?
+
+Some older versions of Docker Desktop did not include all the required Kernel
+modules to run the Antrea Agent, and in particular the `openvswitch` Kernel
+module. See [this issue](https://github.com/docker/for-mac/issues/4660) for more
+information. This issue does not exist with recent Docker Desktop versions (`>=
+2.5`).
+
+### Antrea Agents are not starting on Windows, what could it be?
+
+At this time, we do not officially support Antrea for Kind clusters running on
+Windows hosts. In recent Docker Desktop versions, the default way of running
+Linux containers on Windows is by using the [Docker Desktop WSL 2
+backend](https://docs.docker.com/desktop/windows/wsl/). However, the Linux
+Kernel used by default in WSL 2 does not include all the required Kernel modules
+to run the Antrea Agent, and in particular the `openvswitch` Kernel
+module. There are 2 different ways to work around this issue, which we will not
+detail in this document:
+
+* use the Hyper-V backend for Docker Desktop
+* build a custom Kernel for WSL, with the required Kernel configuration:
+
+ ```text
+ CONFIG_NETFILTER_XT_MATCH_RECENT=y
+ CONFIG_NETFILTER_XT_TARGET_CT=y
+ CONFIG_OPENVSWITCH=y
+ CONFIG_OPENVSWITCH_GRE=y
+ CONFIG_OPENVSWITCH_VXLAN=y
+ CONFIG_OPENVSWITCH_GENEVE=y
+ ```
diff --git a/content/docs/v2.2.0-alpha.2/docs/kubernetes-installers.md b/content/docs/v2.2.0-alpha.2/docs/kubernetes-installers.md
new file mode 100644
index 00000000..76c6ac07
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/kubernetes-installers.md
@@ -0,0 +1,148 @@
+# K8s Installers and Distributions
+
+## Tested installers and distributions
+
+The table below is not comprehensive. Antrea should work with most K8s
+installers and distributions. The table refers to specific version combinations
+which are known to work and have been tested, but support is not limited to that
+list. Each Antrea version supports [multiple K8s minor versions](versioning.md#supported-k8s-versions),
+and installers / distributions based on any one of these K8s versions should
+work with that Antrea version.
+
+| Antrea Version | Installer / Distribution | Cloud Infra | Node Info | Node Size | Conformance Results | Comments |
+|-|-|-|-|-|-|-|
+| v1.0.0 | Kubeadm v1.21.0 | AWS EC2 | Ubuntu 20.04.2 LTS (5.4.0-1045-aws) amd64, docker://20.10.6 | t3.medium | | |
+| - | - | - | Windows Server 2019 Datacenter (10.0.17763.1817), docker://19.3.14 | t3.medium | | |
+| - | - | - | Ubuntu 20.04.2 LTS (5.4.0-1045-aws) arm64, docker://20.10.6 | t3.medium | | |
+| - | Cluster API Provider vSphere (CAPV), K8s 1.19.1 | VMC on AWS, vSphere 7.0.1 | Ubuntu 18.04, containerd | 2 vCPUs, 8GB RAM | | Antrea CI |
+| - | K3s v1.19.8+k3s1 | [OSUOSL] | Ubuntu 20.04.1 LTS (5.4.0-66-generic) arm64, containerd://1.4.3-k3s3 | 2 vCPUs, 4GB RAM | | Antrea CI, cluster installed with [k3sup] 0.9.13 |
+| - | Kops v1.20, K8s v1.20.5 | AWS EC2 | Ubuntu 20.04.2 LTS (5.4.0-1041-aws) amd64, containerd://1.4.4 | t3.medium | [results tarball](http://downloads.antrea.io/artifacts/sonobuoy-conformance/kops_202104212218_sonobuoy_bf0f8e77-c9df-472a-85e2-65e456cf4d83.tar.gz) | |
+| - | EKS, K8s v1.17.12 | AWS | AmazonLinux2, docker | t3.medium | | Antrea CI |
+| - | GKE, K8s v1.19.8-gke.1600 | GCP | Ubuntu 18.04, docker | e2-standard-4 | | Antrea CI |
+| - | AKS, K8s v1.18.14 | Azure | Ubuntu 18.04, moby | Standard_DS2_v2 | | Antrea CI |
+| - | AKS, K8s v1.19.9 | Azure | Ubuntu 18.04, containerd | Standard_DS2_v2 | | Antrea CI |
+| - | Kind v0.9.0, K8s v1.19.1 | N/A | Ubuntu 20.10, containerd://1.4.0 | N/A | | [Requirements for using Antrea on Kind](kind.md) |
+| - | Minikube v1.25.0 | N/A | Ubuntu 20.04.2 LTS (5.10.76-linuxkit) arm64, docker://20.10.12 | 8GB RAM | | |
+| v1.11.0 | Kubeadm v1.20.2 | N/A | openEuler 22.03 LTS, docker://18.09.0 | 10GB RAM | | |
+| v1.11.0 | Kubeadm v1.25.5 | N/A | openEuler 22.03 LTS, containerd://1.6.18 | 10GB RAM | | |
+| v1.15.0 | Talos v1.5.5 | Docker provisioner | Talos | 2 vCPUs, 2.1 GB RAM | Pass | Requires Antrea v1.15 or above |
+| - | - | QEMU provisioner | Talos | 2 vCPUs, 2.1 GB RAM | Pass | Requires Antrea v1.15 or above |
+| v2.0 | Rancher v2.7.0, RKE2, K8s v1.24.10 | vSphere | Ubuntu 22.04.1 LTS (5.15.0-57-generic) amd64, docker://20.10.21 | 4 vCPUs, 4GB RAM | | Antrea CI |
+
+## Installer-specific instructions
+
+### Kubeadm
+
+When running `kubeadm init` to create a cluster, you need to provide a range of
+IP addresses for the Pod network using `--pod-network-cidr`. By default, a /24
+subnet will be allocated out of the CIDR to every Node which joins the cluster,
+so make sure you use a large enough CIDR to accommodate the number of Nodes you
+want; for example, with the default /24 per-Node subnets, a /16 Pod CIDR can
+accommodate up to 256 Nodes. Once the cluster has been created, this CIDR
+cannot be changed.
+
+### Rancher
+
+Follow these steps to deploy Antrea (as a [custom CNI](https://rke.docs.rancher.com/config-options/add-ons/network-plugins/custom-network-plugin-example))
+on a [Rancher](https://ranchermanager.docs.rancher.com/pages-for-subheaders/kubernetes-clusters-in-rancher-setup)-managed cluster:
+
+* Edit the cluster YAML and set the `network-plugin` option to `none`.
+
+* Add an addon for Antrea, in the following manner:
+
+ ```yaml
+ addons_include:
+  - <URL of the Antrea YAML manifest>
+ ```
+
+### K3s
+
+When creating a cluster, run K3s with the following options:
+
+* `--flannel-backend=none`, which lets you run the [CNI of your
+ choice](https://rancher.com/docs/k3s/latest/en/installation/network-options/)
+* `--disable-network-policy`, to disable the K3s NetworkPolicy controller
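+
+For example, with the K3s installation script, these options can be passed through the standard
+`INSTALL_K3S_EXEC` mechanism (a sketch):
+
+```bash
+curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --flannel-backend=none --disable-network-policy" sh -
+```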
+
+### Kops
+
+When creating a cluster, run Kops with `--networking cni`, to enable CNI for the
+cluster without deploying a specific network plugin.
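+
+For example (a sketch with hypothetical cluster name, state store, and zone):
+
+```bash
+kops create cluster --name my-cluster.example.com --state s3://my-kops-state-store \
+  --zones us-west-2a --networking cni
+```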
+
+### Kind
+
+To deploy Antrea on Kind, please follow these [steps](kind.md).
+
+### Minikube
+
+To deploy Antrea on minikube, please follow these [steps](minikube.md).
+
+### Talos
+
+[Talos](https://www.talos.dev/) is a Linux distribution designed for running
+Kubernetes. Antrea can be used as the CNI on Talos clusters (tested with both
+the Docker provisioner and the QEMU provisioner). However, because of some
+built-in security settings in Talos, the default configuration values cannot be
+used when installing Antrea. You will need to install Antrea using Helm, with a
+few custom values. Antrea v1.15 or above is required.
+
+Follow these steps to deploy Antrea on a Talos cluster:
+
+* Make sure that your Talos cluster is created without a CNI. To ensure this,
+ you can use a config patch. For example, to create a Talos cluster without a
+ CNI, using the Docker provisioner:
+
+ ```bash
+ cat << EOF > ./patch.yaml
+ cluster:
+ network:
+ cni:
+ name: none
+ EOF
+
+ talosctl cluster create --config-patch=@patch.yaml --wait=false --workers 2
+ ```
+
+ Notice how we use `--wait=false`: the cluster will never be "ready" until a
+ CNI is installed.
+
+ Note that while we use the Docker provisioner here, you can use the Talos
+ platform of your choice.
+
+* Ensure that you retrieve the Kubeconfig for your new cluster once it is
+ available. You may need to use the `talosctl kubeconfig` command for this.
+
+* Install Antrea using Helm, with the appropriate values:
+
+ ```bash
+ cat << EOF > ./values.yaml
+ agent:
+ dontLoadKernelModules: true
+ installCNI:
+ securityContext:
+ capabilities: []
+ EOF
+
+  helm install -n kube-system antrea -f ./values.yaml antrea/antrea
+ ```
+
+ The above configuration will drop all capabilities from the `installCNI`
+ container, and instruct the Antrea Agent not to try loading any Kernel module
+ explicitly.
+
+## Updating the list
+
+You can [open a Pull Request](../CONTRIBUTING.md) to:
+
+* Add a new K8s installer or distribution to the table above.
+* Add a new combination of versions that you have tested successfully to the
+ table above.
+
+Please make sure that you run conformance tests with [sonobuoy] and consider
+uploading the test results to a publicly accessible location. You can run
+sonobuoy with:
+
+```bash
+sonobuoy run --mode certified-conformance
+```
+
+[k3sup]: https://github.com/alexellis/k3sup
+[OSUOSL]: https://osuosl.org/services/aarch64/
+[sonobuoy]: https://github.com/vmware-tanzu/sonobuoy
diff --git a/content/docs/v2.2.0-alpha.2/docs/maintainers/antrea-docker-image.md b/content/docs/v2.2.0-alpha.2/docs/maintainers/antrea-docker-image.md
new file mode 100644
index 00000000..d6ceac65
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/maintainers/antrea-docker-image.md
@@ -0,0 +1,44 @@
+# Antrea Docker image
+
+The main Antrea Docker images (`antrea/antrea-agent-ubuntu` and
+`antrea/antrea-controller-ubuntu`) are multi-arch images. For example, the
+`antrea/antrea-agent-ubuntu` manifest is a list of three manifests:
+`antrea/antrea-agent-ubuntu-amd64`, `antrea/antrea-agent-ubuntu-arm64` and
+`antrea/antrea-agent-ubuntu-arm`. Of these three manifests, only the first one
+is built and uploaded to Dockerhub by Github workflows defined in the
+`antrea-io/antrea` repository. The other two are built and uploaded by Github
+workflows defined in a private repository (`vmware-tanzu/antrea-build-infra`),
+to which only the project maintainers have access. These workflows are triggered
+every time the `main` branch of `antrea-io/antrea` is updated, as well as every
+time a new Antrea Github release is created. They build the
+`antrea/antrea-agent-ubuntu-arm64` and `antrea/antrea-agent-ubuntu-arm` Docker
+images on native arm64 workers, then create the `antrea/antrea-agent-ubuntu`
+multi-arch manifest and push it to Dockerhub. The same goes for the controller
+images. They are also in charge of testing the images in a
+[K3s](https://github.com/k3s-io/k3s) cluster.
+
+## Why do we use a private repository?
+
+The `vmware-tanzu/antrea-build-infra` repository uses self-hosted ARM64 workers
+provided by the [Open Source Lab](https://osuosl.org/services/aarch64/) at
+Oregon State University. These workers enable us to build, and more importantly
+*test*, the Antrea Docker images for the arm64 and arm/v7 architectures. Being
+able to build Docker images on native ARM platforms is convenient as it is much
+faster than emulation. But if we just wanted to build the images, emulation
+would probably be good enough. However, testing Kubernetes ARM support using
+emulation is no piece of cake, which is why we prefer to use native ARM64
+workers.
+
+Github strongly
+[recommends](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories)
+not to use self-hosted runners with public repositories, for security
+reasons. It would be too easy for a malicious person to run arbitrary code on
+the runners by opening a pull request. Were we to make this repository public,
+we would therefore at least need to disable pull requests, which is sub-optimal
+for a public repository. We believe Github will address the issue eventually and
+provide safeguards to enable using self-hosted runners with public
+repositories, at which point we will migrate workflows from this repository to
+the main Antrea repository.
+
+In the future, we may switch over to ARM hosted Github runners provided by the
+CNCF.
diff --git a/content/docs/v2.2.0-alpha.2/docs/maintainers/build-kubemark.md b/content/docs/v2.2.0-alpha.2/docs/maintainers/build-kubemark.md
new file mode 100644
index 00000000..c31fccfb
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/maintainers/build-kubemark.md
@@ -0,0 +1,13 @@
+# Build the kubemark image
+
+This document briefly describes how to build the kubemark image used in
+[Antrea scale testing](../antrea-agent-simulator.md).
+
+```bash
+cd $KUBERNETES_PATH
+git checkout v1.29.0
+make WHAT=cmd/kubemark KUBE_BUILD_PLATFORMS=linux/amd64
+cp ./_output/local/bin/linux/amd64/kubemark cluster/images/kubemark
+cd cluster/images/kubemark
+docker build -t antrea/kubemark:v1.29.0 .
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/maintainers/getting-started-gif.md b/content/docs/v2.2.0-alpha.2/docs/maintainers/getting-started-gif.md
new file mode 100644
index 00000000..4e5f78fc
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/maintainers/getting-started-gif.md
@@ -0,0 +1,12 @@
+# Getting-started GIF
+
+To refresh the gif image included in
+[getting-started.md](../getting-started.md), follow these steps:
+
+* install [asciinema](https://asciinema.org/)
+* set `PS1="> "` in your bash profile file (e.g. `.bashrc`, `zshrc`, ...) to simplify the prompt
+* record the cast with the correct shell, e.g. `SHELL=zsh asciinema rec my.cast`
+* convert the cast file to a gif file: `docker run --rm -v $PWD:/data -w /data asciinema/asciicast2gif -s 3 -w 120 -h 20 my.cast my.gif`
+* upload the gif file to Github's CDN by following these
+ [instructions](https://gist.github.com/vinkla/dca76249ba6b73c5dd66a4e986df4c8d)
+* update the link in [getting-started.md](../getting-started.md) by opening a PR
diff --git a/content/docs/v2.2.0-alpha.2/docs/maintainers/release.md b/content/docs/v2.2.0-alpha.2/docs/maintainers/release.md
new file mode 100644
index 00000000..61b9e17e
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/maintainers/release.md
@@ -0,0 +1,93 @@
+# Antrea Release Process
+
+This file documents the list of steps to perform to create a new Antrea
+release. We use `` as a placeholder for the release tag (e.g. `v1.4.0`).
+
+1. *For a minor release* On the code freeze date (typically one week before the
+ actual scheduled release date), create a release branch for the new minor
+   release (e.g. `release-1.4`).
+ - after that time, only bug fixes should be merged into the release branch,
+ by [cherry-picking](../contributors/cherry-picks.md) the fix after it has
+ been merged into main. The maintainer in charge of that specific minor
+ release can either do the cherry-picking directly or ask the person who
+ contributed the fix to do it.
+
+2. Open a PR (labelled with `kind/release`) against the appropriate release
+ branch with the following commits:
+ - a commit to update the [CHANGELOG](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/CHANGELOG). *For a minor release*,
+     all significant changes and all bug fixes (labelled with
+     `action/release-note`) since the first version of the previous minor release
+ should be mentioned, even bug fixes which have already been included in
+ some patch release. *For a patch release*, you will mention all the bug
+ fixes since the previous release with the same minor version. The commit
+     message must be *exactly* `"Update CHANGELOG for <TAG> release"`, as a bot
+ will look for this commit and cherry-pick it to update the main branch
+ (starting with Antrea v1.0). The
+ [prepare-changelog.sh](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/release/prepare-changelog.sh) script may
+ be used to easily generate links to PRs and the Github profiles of PR
+ authors. Use `prepare-changelog.sh -h` to get the usage.
+ - a commit to update [VERSION](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/VERSION) as needed, using the following
+     commit message: `"Set VERSION to <TAG>"`. Before committing, ensure that
+ you run `make -C build/charts/ helm-docs` and include the changes.
+
+3. Run all the tests for the PR, investigating test failures and re-triggering
+ the tests as needed.
+   - Github workflows are run automatically whenever the head branch is updated.
+ - Jenkins tests need to be [triggered manually](../../CONTRIBUTING.md#getting-your-pr-verified-by-ci).
+ - Cloud tests need to be triggered manually through the
+ [Jenkins web UI](https://jenkins.antrea.io/). Admin access is
+ required. For each job (AKS, EKS, GKE), click on `Build with Parameters`,
+ and enter the name of your fork as `ANTREA_REPO` and the name of your
+ branch as `ANTREA_GIT_REVISION`. Test starting times need to be staggered:
+ if multiple jobs run at the same time, the Jenkins worker may run
+ out-of-memory.
+
+4. Request a review from the other maintainers, and anyone else who may need to
+ review the release notes. In case of feedback, you may want to consider
+ waiting for all the tests to succeed before updating your PR. Once all the
+ tests have run successfully once, address review comments, get approval for
+ your PR, and merge.
+ - this is the only case for which the "Rebase and merge" option should be
+ used instead of the "Squash and merge" option. This is important, in order
+ to ensure that changes to the CHANGELOG are preserved as an individual
+ commit. You will need to enable the "Allow rebase merging" setting in the
+ repository settings temporarily, and remember to disable it again right
+ after you merge.
+
+5. Make the release on Github **with the release branch as the target** and copy
+ the relevant section of the CHANGELOG as the release description (make sure
+ all the markdown links work). The
+ [draft-release.sh](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/hack/release/draft-release.sh) script can
+ be used to create the release draft. Use `draft-release.sh -h` to get the
+ usage. You typically should **not** be checking the `Set as a pre-release`
+   box. This would only be necessary for a release candidate (e.g., `<TAG>` is
+ `1.4.0-rc.1`), which we do not have at the moment. There is no need to upload
+ any assets as this will be done automatically by a Github workflow, after you
+ create the release.
+ - the `Set as the latest release` box is checked by default. **If you are
+ creating a patch release for an older minor version of Antrea, you should
+ uncheck the box.**
+
+6. After a while (time for the relevant Github workflows to complete), check that:
+ - the Docker image has been pushed to
+ [dockerhub](https://hub.docker.com/u/antrea) with the correct tag. This is
+     handled by a Github workflow defined in a separate Github repository and it
+ can take some time for this workflow to complete. See this
+ [document](antrea-docker-image.md) for more information.
+ - the assets have been uploaded to the release (`antctl` binaries and yaml
+ manifests). This is handled by the `Upload assets to release` workflow. In
+ particular, the following link should work:
+     `https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml`.
+
+7. After the appropriate Github workflow completes, a bot will automatically
+ submit a PR to update the CHANGELOG in the main branch. You should verify the
+ contents of the PR and merge it (no need to run the tests, use admin
+ privileges).
+
+8. *For a minor release* Finally, open a PR against the main branch with a
+ single commit, to update [VERSION](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/VERSION) to the next minor version
+ (with `-dev` suffix). For example, if the release was for `v1.4.0`, the
+ VERSION file should be updated to `v1.5.0-dev`. Before committing, ensure
+ that you run `make -C build/charts/ helm-docs` and include the changes. Note
+ that after a patch release, the VERSION file in the main branch is never
+ updated, so no additional commit is needed.
diff --git a/content/docs/v2.2.0-alpha.2/docs/maintainers/updating-ovs-windows.md b/content/docs/v2.2.0-alpha.2/docs/maintainers/updating-ovs-windows.md
new file mode 100644
index 00000000..f42612f5
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/maintainers/updating-ovs-windows.md
@@ -0,0 +1,37 @@
+# Updating the OVS Windows Binaries
+
+Antrea ships a zip archive with OVS binaries for Windows. The binaries are
+hosted on the antrea.io website and updated as needed. This file documents the
+procedure to upload a new version of the OVS binaries. The archive is served
+from AWS S3, and therefore access to the Antrea S3 account is required for this
+procedure.
+
+* We assume that you have already built the OVS binaries (if a custom build is
+ required), or retrieved them from the official OVS build pipelines. The
+ binaries must be built in **Release** mode for acceptable performance.
+
+* Name the zip archive appropriately:
+ `ovs-<OVS VERSION>[-antrea.<BUILD NUM>]-win64.zip`
+ - the format for `<OVS VERSION>` is `<MAJOR>.<MINOR>.<PATCH>`, with no `v`
+ prefix.
+ - the `-antrea.<BUILD NUM>` component is optional but must be provided if this
+ is not the official build for the referenced OVS version. `<BUILD NUM>`
+ starts at 1 and is incremented for every new upload corresponding to that
+ OVS version.
+
+* Generate the SHA256 checksum for the archive.
+ - place yourself in the directory containing the archive.
+ - run `sha256sum -b <NAME>.zip > <NAME>.zip.sha256`, where `<NAME>` is
+ determined by the previous step.
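+
+ For example, assuming a hypothetical archive built from OVS 3.0.5 with Antrea
+ build number 1:
+
+ ```bash
+ sha256sum -b ovs-3.0.5-antrea.1-win64.zip > ovs-3.0.5-antrea.1-win64.zip.sha256
+ ```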
+
+* Upload the archive and SHA256 checksum file to the `ovs/` folder in the
+ `downloads.antrea.io` S3 bucket. As you upload the files, grant public read
+ access to them (you can also do it after the upload with the `Make public`
+ action).
+
+* Validate both public links:
+ - `https://downloads.antrea.io/ovs/<NAME>.zip`
+ - `https://downloads.antrea.io/ovs/<NAME>.zip.sha256`
+
+* Update the Antrea Windows documentation and helper scripts as needed,
+ e.g. `hack/windows/Install-OVS.ps1`.
diff --git a/content/docs/v2.2.0-alpha.2/docs/migrate-to-antrea.md b/content/docs/v2.2.0-alpha.2/docs/migrate-to-antrea.md
new file mode 100644
index 00000000..1eb4d2bc
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/migrate-to-antrea.md
@@ -0,0 +1,74 @@
+# Migrate from another CNI to Antrea
+
+This document provides guidance on migrating from other CNIs to Antrea,
+starting from version v1.15.0.
+
+NOTE: The following is a reference list of CNIs and versions for which we have
+verified the migration process. CNIs and versions that are not listed here
+might also work. Please create an issue if you run into problems during the
+migration to Antrea. During the migration process, no Kubernetes resources
+should be created or deleted, otherwise the migration process might fail or
+some unexpected problems might occur.
+
+| CNI | Version |
+|---------|---------|
+| Calico | v3.26 |
+| Flannel | v0.22.0 |
+
+The migration process is divided into three steps:
+
+1. Clean up the old CNI.
+2. Install Antrea in the cluster.
+3. Deploy Antrea migrator.
+
+## Clean up the old CNI
+
+The cleanup process varies across CNIs, typically you should remove
+the DaemonSet, Deployment, and CRDs of the old CNI from the cluster.
+For example, if you used `kubectl apply -f <CNI_MANIFEST>` to install
+the old CNI, you could then use `kubectl delete -f <CNI_MANIFEST>` to
+uninstall it.
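+
+For instance, for a Calico installation deployed from its standard manifest,
+the cleanup might look like this (a sketch; the exact manifest URL depends on
+how Calico was installed in your cluster):
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
+```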
+
+## Install Antrea
+
+The second step is to install Antrea in the cluster. You can follow the
+[installation guide](https://github.com/antrea-io/antrea/blob/main/docs/getting-started.md)
+to install Antrea. The following is an example of installing Antrea v1.15.0:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.15.0/antrea.yml
+```
+
+## Deploy Antrea migrator
+
+After Antrea is up and running, you can now deploy the Antrea migrator
+with the following command. The migrator runs as a DaemonSet, `antrea-migrator`,
+in the cluster, which will restart all non-hostNetwork Pods in the cluster
+in-place and perform the necessary network resource cleanup.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-migrator.yml
+```
+
+The reason for restarting all Pods is that Antrea needs to take over the
+network management and IPAM from the old CNI. In order to avoid the Pods
+being rescheduled and minimize service downtime, the migrator restarts
+all non-hostNetwork Pods in-place by restarting their sandbox containers.
+Therefore, it's expected to see the `RESTARTS` count for these Pods being
+increased by 1 like below:
+
+```bash
+$ kubectl get pod -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+migrate-example-6d6b97f96b-29qbq 1/1 Running 1 (24s ago) 2m5s 10.10.1.3 test-worker
+migrate-example-6d6b97f96b-dqx2g 1/1 Running 1 (23s ago) 2m5s 10.10.1.6 test-worker
+migrate-example-6d6b97f96b-jpflg 1/1 Running 1 (23s ago) 2m5s 10.10.1.5 test-worker
+```
+
+When the `antrea-migrator` Pods on all Nodes are in `Running` state,
+the migration process is completed. You can then remove the `antrea-migrator`
+DaemonSet safely with the following command:
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-migrator.yml
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/minikube.md b/content/docs/v2.2.0-alpha.2/docs/minikube.md
new file mode 100644
index 00000000..5cffec44
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/minikube.md
@@ -0,0 +1,48 @@
+# Deploying Antrea on Minikube
+
+
+- [Install Minikube](#install-minikube)
+- [Deploy Antrea](#deploy-antrea)
+ - [Deploy Antrea to Minikube cluster](#deploy-antrea-to-minikube-cluster)
+ - [Deploy a local build of Antrea to Minikube cluster (for developers)](#deploy-a-local-build-of-antrea-to-minikube-cluster-for-developers)
+- [Verification](#verification)
+
+
+## Install Minikube
+
+Follow these [steps](https://minikube.sigs.k8s.io/docs/start) to install minikube and set up your environment.
+
+## Deploy Antrea
+
+### Deploy Antrea to Minikube cluster
+
+```bash
+# curl is required because --cni flag does not accept URL as a parameter
+curl -Lo antrea.yml https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+minikube start --cni=antrea.yml --network-plugin=cni
+```
+
+### Deploy a local build of Antrea to Minikube cluster (for developers)
+
+These instructions assume that you have built the Antrea Docker image locally
+(e.g. by running `make` from the root of the repository, or in case of arm64 architecture by running
+`./hack/build-antrea-linux-all.sh --platform linux/arm64`).
+
+```bash
+# load the Antrea Docker images in the minikube nodes
+minikube image load antrea/antrea-controller-ubuntu:latest
+minikube image load antrea/antrea-agent-ubuntu:latest
+# deploy Antrea
+kubectl apply -f antrea/build/yamls/antrea.yml
+```
+
+## Verification
+
+After a few seconds you should be able to observe the following when running
+`kubectl get pods -l app=antrea -n kube-system`:
+
+```txt
+NAME READY STATUS RESTARTS AGE
+antrea-agent-9ftn9 2/2 Running 0 66m
+antrea-controller-56f97bbcff-zbfmv 1/1 Running 0 66m
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicast-guide.md b/content/docs/v2.2.0-alpha.2/docs/multicast-guide.md
new file mode 100644
index 00000000..2e53ba6d
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicast-guide.md
@@ -0,0 +1,187 @@
+# Multicast User Guide
+
+Antrea supports multicast traffic in the following scenarios:
+
+1. Pod to Pod - a Pod that has joined a multicast group will receive the
+ multicast traffic to that group from the Pod senders.
+2. Pod to External - external hosts can receive the multicast traffic sent
+ from Pods, when the Node network supports multicast forwarding / routing to
+ the external hosts.
+3. External to Pod - Pods can receive the multicast traffic from external
+ hosts.
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [Multicast NetworkPolicy](#multicast-networkpolicy)
+- [Debugging and collecting multicast statistics](#debugging-and-collecting-multicast-statistics)
+ - [Pod multicast group information](#pod-multicast-group-information)
+ - [Inbound and outbound multicast traffic statistics](#inbound-and-outbound-multicast-traffic-statistics)
+ - [Multicast NetworkPolicy statistics](#multicast-networkpolicy-statistics)
+- [Use case example](#use-case-example)
+- [Limitations](#limitations)
+ - [Encap mode](#encap-mode)
+ - [Maximum number of receiver groups on one Node](#maximum-number-of-receiver-groups-on-one-node)
+ - [Traffic in local network control block](#traffic-in-local-network-control-block)
+ - [Linux kernel](#linux-kernel)
+ - [Antrea FlexibleIPAM](#antrea-flexibleipam)
+
+
+## Prerequisites
+
+Multicast support was introduced in Antrea v1.5.0 as an alpha feature, and was
+graduated to beta in v1.12.0.
+
+* Prior to v1.12.0, a feature gate, `Multicast` must be enabled in the
+ `antrea-controller` and `antrea-agent` configuration to use the feature.
+* Starting from v1.12.0, the feature gate is enabled by default, and you need to
+ set the `multicast.enable` flag to true in the `antrea-agent` configuration to
+ use the feature.
+
+There are three other configuration options - `multicastInterfaces`,
+`igmpQueryVersions`, and `igmpQueryInterval` - for `antrea-agent`:
+
+```yaml
+ antrea-agent.conf: |
+ multicast:
+ enable: true
+ # The names of the interfaces on Nodes that are used to forward multicast traffic.
+ # Defaults to transport interface if not set.
+ multicastInterfaces:
+ # The versions of IGMP queries antrea-agent sends to Pods.
+ # Valid versions are 1, 2 and 3.
+ igmpQueryVersions:
+ - 1
+ - 2
+ - 3
+ # The interval at which the antrea-agent sends IGMP queries to Pods.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ igmpQueryInterval: "125s"
+```
+
+## Multicast NetworkPolicy
+
+Antrea NetworkPolicy and Antrea ClusterNetworkPolicy are supported for the
+following types of multicast traffic:
+
+1. IGMP egress rules: applied to IGMP membership report and IGMP leave group
+ messages.
+2. IGMP ingress rules: applied to IGMP query, which includes IGMPv1, IGMPv2, and
+ IGMPv3.
+3. Multicast egress rules: applied to non-IGMP multicast traffic from the
+ selected Pods to other Pods or external hosts.
+
+Note that multicast ingress rules are not supported at the moment.
+
+Examples: You can refer to the [ACNP for IGMP traffic](antrea-network-policy.md#acnp-for-igmp-traffic)
+and [ACNP for multicast egress traffic](antrea-network-policy.md#acnp-for-multicast-egress-traffic)
+examples in the Antrea NetworkPolicy document.
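+
+As an additional illustration, below is a minimal sketch of an Antrea
+ClusterNetworkPolicy that drops multicast traffic from selected Pods to a given
+group. The Pod selector and the group address are illustrative:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+  name: acnp-multicast-egress-drop
+spec:
+  priority: 5
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: mcjoin        # Pods whose multicast egress is restricted
+  egress:
+    - action: Drop
+      to:
+        - ipBlock:
+            cidr: 225.1.2.3/32   # the multicast group to block
+```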
+
+## Debugging and collecting multicast statistics
+
+Antrea provides tooling to check multicast group information and multicast
+traffic statistics.
+
+### Pod multicast group information
+
+The `kubectl get multicastgroups` command prints multicast groups joined by Pods
+in the cluster. Example output of the command:
+
+```bash
+$ kubectl get multicastgroups
+GROUP PODS
+225.1.2.3 default/mcjoin, namespace/pod
+224.5.6.4 default/mcjoin
+```
+
+### Inbound and outbound multicast traffic statistics
+
+`antctl` supports printing multicast traffic statistics of Pods. Please refer to
+the corresponding [antctl user guide section](antctl.md#multicast-commands).
+
+### Multicast NetworkPolicy statistics
+
+The [Antrea NetworkPolicyStats feature](feature-gates.md#networkpolicystats)
+also supports multicast NetworkPolicies.
+
+## Use case example
+
+This section will take multicast video streaming as an example to demonstrate
+how multicast works with Antrea. In this example,
+[VLC](https://www.videolan.org/vlc/) multimedia tools are used to generate and
+consume multicast video streams.
+
+To start a video streaming server, we start a VLC Pod to stream a sample video
+to the multicast IP address `239.255.12.42` with TTL 6.
+
+```bash
+kubectl run -i --tty --image=quay.io/galexrt/vlc:latest vlc-sender -- --intf ncurses --vout dummy --aout dummy 'https://upload.wikimedia.org/wikipedia/commons/transcoded/2/26/Bees_on_flowers.webm/Bees_on_flowers.webm.120p.vp9.webm' --sout udp:239.255.12.42 --ttl 6 --repeat
+```
+
+You can verify multicast traffic is sent out from this Pod by running
+`antctl get podmulticaststats` in the `antrea-agent` Pod on the local Node,
+which indicates the VLC Pod is sending out multicast video streams.
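+
+Assuming the local `antrea-agent` Pod is named `antrea-agent-9ftn9` (the actual
+name will differ in your cluster), the check might look like this:
+
+```bash
+kubectl exec -n kube-system antrea-agent-9ftn9 -c antrea-agent -- antctl get podmulticaststats
+```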
+
+You can also check the multicast routes on the Node by running the `ip mroute`
+command, which should print the following route for forwarding the multicast
+traffic from the Antrea gateway interface to the transport interface.
+
+```bash
+$ ip mroute
+(<Pod IP>, 239.255.12.42)  Iif: antrea-gw0  Oifs: <transport interface>  State: resolved
+```
+
+We also create a VLC Pod to be the receiver with the following command:
+
+```bash
+kubectl run -i --tty --image=quay.io/galexrt/vlc:latest vlc-receiver -- --intf ncurses --vout dummy --aout dummy udp://@239.255.12.42 --repeat
+```
+
+It's expected to see inbound multicast traffic to this Pod by running
+`antctl get podmulticaststats` in the local `antrea-agent` Pod,
+which indicates the VLC Pod is receiving the video stream.
+
+Also, the `kubectl get multicastgroups` command will show that `vlc-receiver`
+has joined multicast group `239.255.12.42`.
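+
+For this example, the output might look like the following (illustrative):
+
+```bash
+$ kubectl get multicastgroups
+GROUP           PODS
+239.255.12.42   default/vlc-receiver
+```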
+
+## Limitations
+
+This feature is currently supported only for IPv4 Linux clusters. Support for
+Windows and IPv6 will be added in the future.
+
+### Encap mode
+
+The configuration option `multicastInterfaces` is not supported with encap mode.
+Multicast packets in encap mode are SNATed and forwarded to the transport
+interface only.
+
+### Maximum number of receiver groups on one Node
+
+A Linux host limits the maximum number of multicast groups it can subscribe to;
+the default number is 20. The limit can be changed by setting [/proc/sys/net/ipv4/igmp_max_memberships](https://sysctl-explorer.net/net/ipv4/igmp_max_memberships/).
+Users are responsible for changing the limit if Pods on the Node are expected to
+join more than 20 groups.
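+
+For example, the limit could be checked and raised as follows on a Node (the
+new value of 100 is only illustrative):
+
+```bash
+# check the current per-host limit
+sysctl net.ipv4.igmp_max_memberships
+# raise the limit; run on each Node where Pods join many groups
+sudo sysctl -w net.ipv4.igmp_max_memberships=100
+```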
+
+### Traffic in local network control block
+
+Multicast IPs in [Local Network Control Block](https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml#multicast-addresses-1) (224.0.0.0/24)
+can only work in encap mode. Multicast traffic destined for those addresses
+is not expected to be forwarded, therefore, no multicast route will be
+configured for them. External hosts are not supposed to send and receive traffic
+with those addresses either.
+
+### Linux kernel
+
+If the following situations apply to your Nodes, you may observe that multicast
+traffic is not routed correctly:
+
+1. Node kernel version under 5.4
+2. Node network doesn't support IGMP snooping
+
+### Antrea FlexibleIPAM
+
+The configuration option `multicastInterfaces` is not supported with
+[Antrea FlexibleIPAM](antrea-ipam.md#antrea-flexible-ipam). When Antrea
+FlexibleIPAM is enabled, multicast packets are forwarded to the uplink interface
+only.
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/antctl.md b/content/docs/v2.2.0-alpha.2/docs/multicluster/antctl.md
new file mode 100644
index 00000000..01dab406
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/antctl.md
@@ -0,0 +1,150 @@
+# Antctl Multi-cluster commands
+
+Starting from version 1.6.0, Antrea supports the `antctl mc` commands, which can
+collect information from a leader cluster for troubleshooting Antrea
+Multi-cluster issues, deploy Antrea Multi-cluster and set up ClusterSets in both
+leader and member clusters. The `antctl mc get` command is supported since
+Antrea v1.6.0, while other commands are supported since v1.8.0. These commands
+cannot run inside the `antrea-controller`, `antrea-agent` or
+`antrea-mc-controller` Pods. antctl needs a kubeconfig file to access the target
+cluster's API server, and it will look for the kubeconfig file at
+`$HOME/.kube/config` by default. You can select a different file by setting the
+`KUBECONFIG` environment variable or with the `--kubeconfig` option of antctl.
+
+## antctl mc get
+
+- `antctl mc get clusterset` (or `get clustersets`) command prints all
+ClusterSets, a specified ClusterSet, or the ClusterSet in a specified Namespace.
+- `antctl mc get resourceimport` (or `get resourceimports`, `get ri`) command
+prints all ResourceImports, a specified ResourceImport, or ResourceImports in a
+specified Namespace.
+- `antctl mc get resourceexport` (or `get resourceexports`, `get re`) command
+prints all ResourceExports, a specified ResourceExport, or ResourceExports in a
+specified Namespace.
+- `antctl mc get joinconfig` command prints member cluster join parameters of
+the ClusterSet in a specified leader cluster Namespace.
+- `antctl mc get membertoken` (or `get membertokens`) command prints all member tokens,
+a specified token, or member tokens in a specified Namespace. The command is supported
+only on a leader cluster.
+
+Using the `json` or `yaml` antctl output format prints more information about
+ClusterSets, ResourceImports, and ResourceExports than the default table
+output format.
+
+```bash
+antctl mc get clusterset [NAME] [-n NAMESPACE] [-o json|yaml] [-A]
+antctl mc get resourceimport [NAME] [-n NAMESPACE] [-o json|yaml] [-A]
+antctl mc get resourceexport [NAME] [-n NAMESPACE] [--clusterid CLUSTERID] [-o json|yaml] [-A]
+antctl mc get joinconfig [--member-token TOKEN_NAME] [-n NAMESPACE]
+antctl mc get membertoken [NAME] [-n NAMESPACE] [-o json|yaml] [-A]
+```
+
+To see the usage examples of these commands, you may also run `antctl mc get [subcommand] --help`.
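+
+For example, the following commands (with an illustrative Namespace) print all
+ClusterSets across Namespaces in YAML format, and the ResourceImports in a
+specific Namespace:
+
+```bash
+antctl mc get clusterset -A -o yaml
+antctl mc get resourceimport -n antrea-multicluster
+```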
+
+## antctl mc create
+
+`antctl mc create` command creates a token for member clusters to join a ClusterSet. The command will
+also create a Secret to store the token, as well as a ServiceAccount and a RoleBinding. The `--output-file`
+option saves the member token Secret manifest to a file.
+
+```bash
+antctl mc create membertoken NAME -n NAMESPACE [-o OUTPUT_FILE]
+```
+
+To see the usage examples of these commands, you may also run `antctl mc create [subcommand] --help`.
+
+## antctl mc delete
+
+`antctl mc delete` command deletes a member token of a ClusterSet. The command will delete the
+corresponding Secret, ServiceAccount and RoleBinding if they exist.
+
+```bash
+antctl mc delete membertoken NAME -n NAMESPACE
+```
+
+To see the usage examples of these commands, you may also run `antctl mc delete [subcommand] --help`.
+
+## antctl mc deploy
+
+`antctl mc deploy` command deploys Antrea Multi-cluster Controller to a leader or member cluster.
+
++ `antctl mc deploy leadercluster` command deploys Antrea Multi-cluster Controller to a leader cluster and imports
+ all the Antrea Multi-cluster CRDs.
++ `antctl mc deploy membercluster` command deploys Antrea Multi-cluster Controller to a member cluster and imports
+ all the Antrea Multi-cluster CRDs.
+
+```bash
+antctl mc deploy leadercluster -n NAMESPACE [--antrea-version ANTREA_VERSION] [-f PATH_TO_MANIFEST]
+antctl mc deploy membercluster -n NAMESPACE [--antrea-version ANTREA_VERSION] [-f PATH_TO_MANIFEST]
+```
+
+To see the usage examples of these commands, you may also run `antctl mc deploy [subcommand] --help`.
+
+## antctl mc init
+
+`antctl mc init` command initializes an Antrea Multi-cluster ClusterSet in a leader cluster. It will create a
+ClusterSet for the leader cluster. If the `-j|--join-config-file` option is specified, the ClusterSet join
+parameters will be saved to the specified file, which can be used in the `antctl mc join` command
+for a member cluster to join the ClusterSet.
+
+```bash
+antctl mc init -n NAMESPACE --clusterset CLUSTERSET_ID --clusterid CLUSTERID [--create-token] [-j JOIN_CONFIG_FILE]
+```
+
+To see the usage examples of this command, you may also run `antctl mc init --help`.
+
+## antctl mc join
+
+`antctl mc join` command lets a member cluster join an existing Antrea Multi-cluster ClusterSet. It will create a
+ClusterSet for the member cluster. Users can use command line options or a config file (which can be the output
+file of the `antctl mc init` command) to specify the ClusterSet join arguments.
+
+When the config file is provided, the command line options may be overridden by the file. A token is needed for a
+member cluster to access the leader cluster API server. Users can either specify a pre-created token Secret with the
+`--token-secret-name` option, or pass a Secret manifest to create the Secret with either the `--token-secret-file`
+option or the config file.
+
+```bash
+antctl mc join --clusterset=CLUSTERSET_ID \
+ --clusterid=CLUSTER_ID \
+ --namespace=[MEMBER_NAMESPACE] \
+ --leader-clusterid=LEADER_CLUSTER_ID \
+ --leader-namespace=LEADER_NAMESPACE \
+ --leader-apiserver=LEADER_APISERVER \
+ --token-secret-name=[TOKEN_SECRET_NAME] \
+ --token-secret-file=[TOKEN_SECRET_FILE]
+
+antctl mc join --config-file JOIN_CONFIG_FILE [--clusterid=CLUSTER_ID] [--token-secret-name=TOKEN_SECRET_NAME] [--token-secret-file=TOKEN_SECRET_FILE]
+```
+
+Below is a config file example:
+
+```yaml
+apiVersion: multicluster.antrea.io/v1alpha1
+kind: ClusterSetJoinConfig
+clusterSetID: clusterset1
+clusterID: cluster-east
+namespace: kube-system
+leaderClusterID: cluster-north
+leaderNamespace: antrea-multicluster
+leaderAPIServer: https://172.18.0.3:6443
+tokenSecretName: cluster-east-token
+```
+
+## antctl mc leave
+
+`antctl mc leave` command lets a member cluster leave a ClusterSet. It will delete the ClusterSet
+and other resources created by antctl for the member cluster.
+
+```bash
+antctl mc leave --clusterset CLUSTERSET_ID --namespace [NAMESPACE]
+```
+
+## antctl mc destroy
+
+`antctl mc destroy` command can destroy an Antrea Multi-cluster ClusterSet in a leader cluster. It will delete the
+ClusterSet and other resources created by antctl for the leader cluster.
+
+```bash
+antctl mc destroy --clusterset=CLUSTERSET_ID --namespace NAMESPACE
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/api.md b/content/docs/v2.2.0-alpha.2/docs/multicluster/api.md
new file mode 100644
index 00000000..d9159193
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/api.md
@@ -0,0 +1,36 @@
+# Antrea Multi-cluster API
+
+This document lists all the API resource versions currently supported by Antrea Multi-cluster.
+
+Antrea Multi-cluster is supported since v1.5.0. Most Custom Resource Definitions (CRDs)
+used by Antrea Multi-cluster are in the API group `multicluster.crd.antrea.io`, and
+two CRDs from [mcs-api](https://github.com/kubernetes-sigs/mcs-api) are in group `multicluster.x-k8s.io`,
+which is defined by the Kubernetes upstream [KEP-1645](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api).
+
+## Currently-supported
+
+### CRDs in `multicluster.crd.antrea.io`
+
+| CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+| ------------------------ | ----------- | ------------- | ----------------------------------- | --------------- |
+| `ClusterSets` | v1alpha2 | v1.13.0 | N/A | N/A |
+| `MemberClusterAnnounces` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `ResourceExports` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `ResourceImports` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `Gateway` | v1alpha1 | v1.7.0 | N/A | N/A |
+| `ClusterInfoImport` | v1alpha1 | v1.7.0 | N/A | N/A |
+
+### CRDs in `multicluster.x-k8s.io`
+
+| CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+| ---------------- | ----------- | ------------- | ----------------------------------- | --------------- |
+| `ServiceExports` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `ServiceImports` | v1alpha1 | v1.5.0 | N/A | N/A |
+
+## Previously-supported
+
+| CRD | API group | CRD version | Introduced in | Deprecated in | Removed in |
+| ------------------------ | ---------------------------- | ----------- | ------------- | ------------- | ---------- |
+| `ClusterClaims` | `multicluster.crd.antrea.io` | v1alpha1 | v1.5.0 | v1.8.0 | v1.8.0 |
+| `ClusterClaims` | `multicluster.crd.antrea.io` | v1alpha2 | v1.8.0 | v1.13.0 | v1.13.0 |
+| `ClusterSets` | `multicluster.crd.antrea.io` | v1alpha1 | v1.5.0 | v1.13.0 | N/A |
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/architecture.md b/content/docs/v2.2.0-alpha.2/docs/multicluster/architecture.md
new file mode 100644
index 00000000..8465be7e
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/architecture.md
@@ -0,0 +1,211 @@
+# Antrea Multi-cluster Architecture
+
+Antrea Multi-cluster implements [Multi-cluster Service API](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api),
+which allows users to create multi-cluster Services that can be accessed across
+clusters in a ClusterSet. Antrea Multi-cluster also supports Antrea
+ClusterNetworkPolicy replication. Multi-cluster admins can define
+ClusterNetworkPolicies to be replicated across the entire ClusterSet, and
+enforced in all member clusters.
+
+An Antrea Multi-cluster ClusterSet includes a leader cluster and multiple member
+clusters. Antrea Multi-cluster Controller needs to be deployed in the leader and
+all member clusters. A cluster can serve as the leader, and meanwhile also be a
+member cluster of the ClusterSet.
+
+The diagram below depicts a basic Antrea Multi-cluster topology with one leader
+cluster and two member clusters.
+
+{{< img src="assets/basic-topology.svg" width="650" alt="Antrea Multi-cluster Topology" >}}
+
+## Terminology
+
+ClusterSet is a placeholder name for a group of clusters with a high degree of mutual
+trust and shared ownership that share Services amongst themselves. Within a ClusterSet,
+Namespace sameness applies, which means all Namespaces with a given name are considered to
+be the same Namespace. The ClusterSet Custom Resource Definition (CRD) defines a ClusterSet,
+including the leader and member cluster information.
+
+The MemberClusterAnnounce CRD declares a member cluster configuration to the leader cluster.
+
+The Common Area is an abstraction in the Antrea Multi-cluster implementation that provides
+a storage interface for resource export/import, which can be read/written by all member and
+leader clusters in the ClusterSet. The Common Area is implemented with a Namespace in the
+leader cluster for a given ClusterSet.
+
+## Antrea Multi-cluster Controller
+
+Antrea Multi-cluster Controller implements ClusterSet management and resource
+export/import in the ClusterSet. In either a leader or a member cluster, Antrea
+Multi-cluster Controller is deployed with a Deployment of a single replica, but
+it takes different responsibilities in leader and member clusters.
+
+### ClusterSet Establishment
+
+In a member cluster, Multi-cluster Controller watches and validates the ClusterSet,
+and creates a MemberClusterAnnounce CR in the Common Area of the leader cluster to
+join the ClusterSet.
+
+In the leader cluster, Multi-cluster controller watches, validates and initializes
+the ClusterSet. It also validates the MemberClusterAnnounce CR created by a member
+cluster and updates the member cluster's connection status to `ClusterSet.Status`.
+
+### Resource Export and Import
+
+In a member cluster, Multi-cluster controller watches exported resources (e.g.
+ServiceExports, Services, Multi-cluster Gateways), encapsulates an exported
+resource into a ResourceExport and creates the ResourceExport CR in the Common
+Area of the leader cluster.
+
+In the leader cluster, Multi-cluster Controller watches ResourceExports created
+by member clusters (in the case of Service and ClusterInfo export), or by the
+ClusterSet admin (in the case of Multi-cluster NetworkPolicy), converts
+ResourceExports to ResourceImports, and creates the ResourceImport CRs in the
+Common Area for member clusters to import them. Multi-cluster Controller also
+merges ResourceExports from different member clusters to a single
+ResourceImport, when these exported resources share the same kind, name, and
+original Namespace (matching Namespace sameness).
+
+Multi-cluster Controller in a member cluster also watches ResourceImports in the
+Common Area of the leader cluster, decapsulates the resources from them, and
+creates the resources (e.g. Services, Endpoints, Antrea ClusterNetworkPolicies,
+ClusterInfoImports) in the member cluster.
+
+For more information about multi-cluster Service export/import, please also check
+the [Service Export and Import](#service-export-and-import) section.
+
+## Multi-cluster Service
+
+### Service Export and Import
+
+{{< img src="assets/resource-export-import-pipeline.svg" width="1500" alt="Antrea Multi-cluster Service Export/Import Pipeline" >}}
+
+Antrea Multi-cluster Controller implements Service export/import among member
+clusters. The above diagram depicts Antrea Multi-cluster resource export/import
+pipeline, using Service export/import as an example.
+
+Given two Services with the same name and Namespace in two member clusters -
+`foo.ns.cluster-a.local` and `foo.ns.cluster-b.local`, a multi-cluster Service can
+be created by the following resource export/import workflow.
+
+* User creates a ServiceExport `foo` in Namespace `ns` in each of the two
+clusters (a sketch of this resource is shown after this list).
+* Multi-cluster Controllers in `cluster-a` and `cluster-b` see ServiceExport
+`foo`, and both create two ResourceExports for the Service and Endpoints
+respectively in the Common Area of the leader cluster.
+* Multi-cluster Controller in the leader cluster sees the ResourcesExports in
+the Common Area, including the two for Service `foo`: `cluster-a-ns-foo-service`,
+`cluster-b-ns-foo-service`; and the two for the Endpoints:
+`cluster-a-ns-foo-endpoints`, `cluster-b-ns-foo-endpoints`. It then creates a
+ResourceImport `ns-foo-service` for the multi-cluster Service; and a
+ResourceImport `ns-foo-endpoints` for the Endpoints, which includes the
+exported endpoints of both `cluster-a-ns-foo-endpoints` and
+`cluster-b-ns-foo-endpoints`.
+* Multi-cluster Controller in each member cluster watches the ResourceImports
+from the Common Area, decapsulates them and gets Service `ns/antrea-mc-foo` and
+Endpoints `ns/antrea-mc-foo`, and creates the Service and Endpoints, as well as
+a ServiceImport `foo` in the local Namespace `ns`.
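+
+The ServiceExport created in the first step is a minimal resource; a sketch for
+Service `foo` in Namespace `ns` would look like this:
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+  name: foo        # must match the name of the exported Service
+  namespace: ns    # must match the Namespace of the exported Service
+```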
+
+### Service Access Across Clusters
+
+Since Antrea v1.7.0, the Service's ClusterIP is exported as the multi-cluster
+Service's Endpoints. Multi-cluster Gateways must be configured to support
+multi-cluster Service access across member clusters, and Service CIDRs cannot
+overlap between clusters. Please refer to [Multi-cluster Gateway](#multi-cluster-gateway)
+for more information. Before Antrea v1.7.0, Pod IPs are exported as the
+multi-cluster Service's Endpoints. Pod IPs must be directly reachable across
+clusters for multi-cluster Service access, and Pod CIDRs cannot overlap between
+clusters. Antrea Multi-cluster only supports creating multi-cluster Services
+for Services of type ClusterIP.
+
+## Multi-cluster Gateway
+
+Antrea has supported Multi-cluster Gateway since v1.7.0. Users can choose
+one K8s Node as the Multi-cluster Gateway in a member cluster. The Gateway Node
+is responsible for routing all cross-cluster traffic from the local cluster to
+other member clusters through tunnels. The diagram below depicts Antrea
+Multi-cluster connectivity with Multi-cluster Gateways.
+
+{{< img src="assets/mc-gateway.svg" width="800" alt="Antrea Multi-cluster Gateway" >}}
+
+Antrea Agent is responsible for setting up tunnels between Gateways of member
+clusters. The tunnels between Gateways use Antrea Agent's configured tunnel type.
+All member clusters in a ClusterSet need to deploy Antrea with the same tunnel
+type.
+
+The Multi-cluster Gateway implementation introduces two new CRDs `Gateway` and
+`ClusterInfoImport`. `Gateway` includes the local Multi-cluster Gateway
+information including: `internalIP` for tunnels to local Nodes, and `gatewayIP`
+for tunnels to remote cluster Gateways. `ClusterInfoImport` includes Gateway
+and network information of member clusters, including Gateway IPs and Service
+CIDRs. The existing resource export/import pipeline is leveraged to exchange
+the cluster network information among member clusters, generating
+ClusterInfoImports in each member cluster.
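+
+As a sketch (the field layout follows the description above; the IPs and the
+Node name are illustrative), a Gateway resource might look like this:
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha1
+kind: Gateway
+metadata:
+  name: node-a1            # the Gateway Node's name
+  namespace: kube-system
+gatewayIP: 10.17.27.55     # used for tunnels to remote cluster Gateways
+internalIP: 10.17.27.55    # used for tunnels to local Nodes
+```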
+
+### Multi-cluster Service Traffic Walk
+
+Let's use the ClusterSet in the above diagram as an example. As shown in the
+diagram:
+
+1. Cluster A has a client Pod named `pod-a` running on a regular Node, and a
+ multi-cluster Service named `antrea-mc-nginx` with ClusterIP `10.112.10.11`
+ in the `default` Namespace.
+2. Cluster B exported a Service named `nginx` with ClusterIP `10.96.2.22` in
+ the `default` Namespace. The Service has one Endpoint `172.170.11.22` which is
+ `pod-b`'s IP.
+3. Cluster C exported a Service named `nginx` with ClusterIP `10.11.12.33` also
+ in the `default` Namespace. The Service has one Endpoint `172.10.11.33` which
+ is `pod-c`'s IP.
+
+The multi-cluster Service `antrea-mc-nginx` in cluster A will have two
+Endpoints:
+
+* `nginx` Service's ClusterIP `10.96.2.22` from cluster B.
+* `nginx` Service's ClusterIP `10.11.12.33` from cluster C.
+
+When the client Pod `pod-a` on cluster A tries to access the multi-cluster
+Service `antrea-mc-nginx`, the request packet will first go through the Service
+load balancing pipeline on the source Node `node-a2`, with one endpoint of the
+multi-cluster Service being chosen as the destination. Let's say endpoint
+`10.11.12.33` from cluster C is chosen, then the request packet will be DNAT'd
+with IP `10.11.12.33` and tunnelled to the local Gateway Node `node-a1`.
+`node-a1` knows from the destination IP (`10.11.12.33`) the packet is
+multi-cluster Service traffic destined for cluster C, and it will tunnel the
+packet to cluster C's Gateway Node `node-c1`, after performing SNAT and setting
+the packet's source IP to its own Gateway IP. On `node-c1`, the packet will go
+through the Service load balancing pipeline again with an endpoint of Service
+`nginx` being chosen as the destination. As the Service has only one endpoint -
+`172.10.11.33` of `pod-c`, the request packet will be DNAT'd to `172.10.11.33`
+and tunnelled to `node-c2` where `pod-c` is running. Finally, on `node-c2` the
+packet will go through the normal Antrea forwarding pipeline and be forwarded
+to `pod-c`.
+
+## Antrea Multi-cluster NetworkPolicy
+
+At this moment, Antrea does not support Pod-level policy enforcement for
+cross-cluster traffic. Access towards multi-cluster Services can be regulated
+with Antrea ClusterNetworkPolicy `toService` rules. In each member cluster,
+users can create an Antrea ClusterNetworkPolicy selecting Pods in that cluster,
+with the imported Multi-cluster Service name and Namespace in an egress
+`toService` rule, and the Action to take for traffic matching this rule.
+For more information regarding Antrea ClusterNetworkPolicy (ACNP), refer
+to [this document](../antrea-network-policy.md).
+
+Multi-cluster admins can also specify certain ClusterNetworkPolicies to be
+replicated across the entire ClusterSet. The ACNP to be replicated should
+be created as a ResourceExport in the leader cluster, and the resource
+export/import pipeline will ensure member clusters receive this ACNP spec
+to be replicated. Each member cluster's Multi-cluster Controller will then
+create an ACNP in their respective clusters.
+
+## Antrea Traffic Modes
+
+Multi-cluster Gateway supports all of `encap`, `noEncap`, `hybrid`, and
+`networkPolicyOnly` modes. In all supported modes, the cross-cluster traffic
+is routed by Multi-cluster Gateways of member clusters, and the traffic goes
+through Antrea overlay tunnels between Gateways. In `noEncap`, `hybrid`, and
+`networkPolicyOnly` modes, even when in-cluster Pod traffic does not go through
+tunnels, antrea-agent still creates tunnels between the Gateway Node and other
+Nodes, and routes cross-cluster traffic to reach the Gateway through the tunnels.
+In particular, for [`networkPolicyOnly` mode](../design/policy-only.md), Antrea only
+handles multi-cluster traffic routing, while the primary CNI takes care of in-cluster
+traffic routing.
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/basic-topology.svg b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/basic-topology.svg
new file mode 100644
index 00000000..d6f5f08e
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/basic-topology.svg
@@ -0,0 +1,548 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/mc-gateway.svg b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/mc-gateway.svg
new file mode 100644
index 00000000..20dd6985
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/mc-gateway.svg
@@ -0,0 +1,1026 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/resource-export-import-pipeline.svg b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/resource-export-import-pipeline.svg
new file mode 100644
index 00000000..26c3450d
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/resource-export-import-pipeline.svg
@@ -0,0 +1,665 @@
+
+
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/sample-clusterset.svg b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/sample-clusterset.svg
new file mode 100644
index 00000000..5418bd09
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/assets/sample-clusterset.svg
@@ -0,0 +1,565 @@
+
+
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/policy-only-mode.md b/content/docs/v2.2.0-alpha.2/docs/multicluster/policy-only-mode.md
new file mode 100644
index 00000000..2bcb502e
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/policy-only-mode.md
@@ -0,0 +1,82 @@
+# Antrea Multi-cluster with NetworkPolicy Only Mode
+
+Multi-cluster Gateway works with Antrea `networkPolicyOnly` mode, in which
+cross-cluster traffic is routed by Multi-cluster Gateways of member clusters,
+and the traffic goes through Antrea overlay tunnels between Gateways and local
+cluster Pods. Pod traffic within a cluster is still handled by the primary CNI,
+not Antrea.
+
+## Deploying Antrea in `networkPolicyOnly` mode with Multi-cluster feature
+
+This section describes steps to deploy Antrea in `networkPolicyOnly` mode
+with the Multi-cluster feature enabled on an EKS cluster.
+
+You can follow [the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+to create an EKS cluster, and follow the [Antrea EKS installation guide](../eks-installation.md)
+to deploy Antrea to an EKS cluster. Please note there are a few changes required
+by Antrea Multi-cluster. You should set the following configuration parameters in
+`antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster`
+feature and Antrea Multi-cluster Gateway:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ namespace: "" # Change to the Namespace where antrea-mc-controller is deployed.
+```
+
+Repeat the same steps to deploy Antrea for all member clusters in a ClusterSet.
+Besides the Antrea deployment, you also need to deploy Antrea Multi-cluster Controller
+in each member cluster. Make sure the Service CIDRs (ClusterIP ranges) do not overlap
+among the member clusters. Please refer to [the quick start guide](./quick-start.md)
+or [the user guide](./user-guide.md) to learn more information about how to configure
+a ClusterSet.
+
+## Connectivity between Clusters
+
+When EKS clusters of a ClusterSet are in different VPCs, you may need to enable connectivity
+between VPCs to support Multi-cluster traffic. You can follow the steps below to set up VPC
+connectivity for a ClusterSet.
+
+In the following descriptions, we take a ClusterSet with two member clusters in two VPCs as
+an example to describe the VPC configuration.
+
+| Cluster ID | PodCIDR | Gateway IP |
+| ------------ | ------------- | ------------ |
+| west-cluster | 110.13.0.0/16 | 110.13.26.12 |
+| east-cluster | 110.14.0.0/16 | 110.14.18.50 |
+
+### VPC Peering Configuration
+
+When the Gateway Nodes do not have public IPs, you may create a VPC peering connection between
+the two VPCs for the Gateways to reach each other. You can follow the
+[AWS documentation](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) to
+configure VPC peering.
+
+You also need to add a route to the route tables of the Gateway Nodes' subnets, to enable
+routing across the peering connection. For `west-cluster`, the route should have `east-cluster`'s
+Pod CIDR: `110.14.0.0/16` to be the destination, and the peering connection to be the target;
+for `east-cluster`, the route should have `west-cluster`'s Pod CIDR: `110.13.0.0/16` to be the
+destination. To learn more about VPC peering routes, please refer to the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html).
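+
+With the AWS CLI, adding the route for `west-cluster` might look like the
+following (the route table and peering connection IDs are placeholders):
+
+```bash
+aws ec2 create-route \
+  --route-table-id rtb-0123456789abcdef0 \
+  --destination-cidr-block 110.14.0.0/16 \
+  --vpc-peering-connection-id pcx-0123456789abcdef0
+```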
+
+### Security Groups
+
+AWS security groups may need to be configured to allow tunnel traffic to Multi-cluster Gateways,
+especially when the member clusters are in different VPCs. EKS should have already created a
+security group for each cluster, which should have a description like "EKS created security group
+applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads.".
+You can add a new rule to the security group for Gateway traffic. For `west-cluster`, add an inbound
+rule with source to be `east-cluster`'s Gateway IP `110.14.18.50/32`; for `east-cluster`, the source
+should be `west-cluster`'s Gateway IP `110.13.26.12/32`.
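+
+For example, assuming the default Geneve tunnel (UDP port 6081), the inbound
+rule for `west-cluster` might be added like this (the security group ID is a
+placeholder):
+
+```bash
+aws ec2 authorize-security-group-ingress \
+  --group-id sg-0123456789abcdef0 \
+  --protocol udp \
+  --port 6081 \
+  --cidr 110.14.18.50/32
+```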
+
+By default, Multi-cluster Gateway IP should be the `InternalIP` of the Gateway Node, but you may
+configure Antrea Multi-cluster to use the Node `ExternalIP`. Please use the right Node IP address
+as the Gateway IP in the security group rule.
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/quick-start.md b/content/docs/v2.2.0-alpha.2/docs/multicluster/quick-start.md
new file mode 100644
index 00000000..035de57c
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/quick-start.md
@@ -0,0 +1,326 @@
+# Antrea Multi-cluster Quick Start
+
+In this quick start guide, we will set up an Antrea Multi-cluster ClusterSet
+with two clusters. One cluster will serve as the leader of the ClusterSet, and
+meanwhile also join as a member cluster; another cluster will be a member only.
+Antrea Multi-cluster supports two types of IP addresses as multi-cluster
+Service endpoints - exported Services' ClusterIPs or backend Pod IPs.
+We use the default `ClusterIP` endpoint type for multi-cluster Services
+in this guide.
+
+The diagram below shows the two clusters and the ClusterSet to be created (for
+simplicity, the diagram just shows two Nodes for each cluster).
+
+{{< img src="assets/sample-clusterset.svg" width="800" alt="Antrea Multi-cluster Example ClusterSet" >}}
+
+## Preparation
+
+We assume an Antrea version >= `v1.8.0` is used in this guide, and the Antrea
+version is set to an environment variable `TAG`. For example, the following
+command sets the Antrea version to `v1.8.0`.
+
+```bash
+export TAG=v1.8.0
+```
+
+To use the latest version of Antrea Multi-cluster from the Antrea main branch,
+you can change the YAML manifest path to: `https://github.com/antrea-io/antrea/tree/main/multicluster/build/yamls/`
+when applying or downloading an Antrea YAML manifest.
+
+Antrea must be deployed in both cluster A and cluster B, and the `Multicluster`
+feature of `antrea-agent` must be enabled to support multi-cluster Services. As we
+use `ClusterIP` endpoint type for multi-cluster Services, an Antrea Multi-cluster
+Gateway needs to be set up in each member cluster to route Service traffic across clusters,
+and two clusters **must have non-overlapping Service CIDRs**. Set the following
+configuration parameters in `antrea-agent.conf` of the Antrea deployment
+manifest to enable the `Multicluster` feature:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ namespace: ""
+```
+
+At the moment, Multi-cluster Gateway only works with the Antrea `encap` traffic
+mode, and all member clusters in a ClusterSet must use the same tunnel type.
+
+## Steps with antctl
+
+`antctl` provides a couple of commands to facilitate deployment, configuration,
+and troubleshooting of Antrea Multi-cluster. This section describes the steps
+to deploy Antrea Multi-cluster and set up the example ClusterSet using `antctl`.
+A [further section](#steps-with-yaml-manifests) will describe the steps to
+achieve the same using YAML manifests.
+
+To execute any command in this section, `antctl` needs access to the target
+cluster's API server, and it needs a kubeconfig file for that. Please refer to
+the [`antctl` Multi-cluster manual](antctl.md) to learn more about the
+kubeconfig file configuration, and the `antctl` Multi-cluster commands. For
+installation of `antctl`, please refer to the [installation guide](../antctl.md#installation).
+
+### Set up Leader and Member in Cluster A
+
+#### Step 1 - deploy Antrea Multi-cluster Controllers for leader and member
+
+Run the following commands to deploy Multi-cluster Controller for the leader
+into Namespace `antrea-multicluster` (Namespace `antrea-multicluster` will be
+created by the commands), and Multi-cluster Controller for the member into
+Namespace `kube-system`.
+
+```bash
+kubectl create ns antrea-multicluster
+antctl mc deploy leadercluster -n antrea-multicluster --antrea-version $TAG
+antctl mc deploy membercluster -n kube-system --antrea-version $TAG
+```
+
+You can run the following command to verify that the leader and member
+`antrea-mc-controller` Pods are deployed and running:
+
+```bash
+$ kubectl get all -A -l="component=antrea-mc-controller"
+NAMESPACE NAME READY STATUS RESTARTS AGE
+antrea-multicluster pod/antrea-mc-controller-cd7bf8f68-kh4kz 1/1 Running 0 50s
+kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 48s
+
+NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
+antrea-multicluster deployment.apps/antrea-mc-controller 1/1 1 1 50s
+kube-system deployment.apps/antrea-mc-controller 1/1 1 1 48s
+```
+
+#### Step 2 - initialize ClusterSet
+
+Run the following commands to create a ClusterSet with cluster A to be the
+leader, and also join the ClusterSet as a member.
+
+```bash
+antctl mc init --clusterset test-clusterset --clusterid test-cluster-leader -n antrea-multicluster --create-token -j join-config.yml
+antctl mc join --clusterid test-cluster-leader -n kube-system --config-file join-config.yml
+```
+
+The above `antctl mc init` command creates a default token (with the
+`--create-token` flag) for member clusters to join the ClusterSet and
+authenticate to the leader cluster API server, and the command saves the token
+Secret manifest and other ClusterSet join arguments to file `join-config.yml`
+(specified with the `-j` option), which can be provided to the `antctl mc join`
+command (with the `--config-file` option) to join the ClusterSet with these
+arguments. If you want to use a separate token for each member cluster for
+security considerations, you can run the following commands to create a token
+and use the token (together with the previously generated configuration file
+`join-config.yml`) to join the ClusterSet:
+
+```bash
+antctl mc create membertoken test-cluster-leader-token -n antrea-multicluster -o test-cluster-leader-token.yml
+antctl mc join --clusterid test-cluster-leader -n kube-system --config-file join-config.yml --token-secret-file test-cluster-leader-token.yml
+```
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Last, you need to choose at least one Node in cluster A to serve as the
+Multi-cluster Gateway. The Node should have an IP that is reachable from the
+cluster B's Gateway Node, so a tunnel can be created between the two Gateways.
+For more information about Multi-cluster Gateway, please refer to the
+[Multi-cluster User Guide](user-guide.md#multi-cluster-gateway-configuration).
+
+Assuming K8s Node `node-a1` is selected for the Multi-cluster Gateway, run
+the following command to annotate the Node with:
+`multicluster.antrea.io/gateway=true` (so Antrea can know it is the Gateway
+Node from the annotation):
+
+```bash
+kubectl annotate node node-a1 multicluster.antrea.io/gateway=true
+```
+
+### Set up Cluster B
+
+Let us switch to cluster B. All the `kubectl` and `antctl` commands in the
+following steps should be run with the `kubeconfig` for cluster B.
+
+#### Step 1 - deploy Antrea Multi-cluster Controller for member
+
+Run the following command to deploy the member Multi-cluster Controller into
+Namespace `kube-system`.
+
+```bash
+antctl mc deploy membercluster -n kube-system --antrea-version $TAG
+```
+
+You can run the following command to verify the `antrea-mc-controller` Pod is
+deployed and running:
+
+```bash
+$ kubectl get all -A -l="component=antrea-mc-controller"
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 40s
+
+NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
+kube-system deployment.apps/antrea-mc-controller 1/1 1 1 40s
+```
+
+#### Step 2 - join ClusterSet
+
+Run the following command to make cluster B join the ClusterSet:
+
+```bash
+antctl mc join --clusterid test-cluster-member -n kube-system --config-file join-config.yml
+```
+
+`join-config.yml` is generated when creating the ClusterSet in cluster A. Again,
+you can also run the `antctl mc create membertoken` in the leader cluster
+(cluster A) to create a separate token for cluster B, and join using that token,
+rather than the default token in `join-config.yml`.
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Assuming K8s Node `node-b1` is chosen to be the Multi-cluster Gateway for cluster
+B, run the following command to annotate the Node:
+
+```bash
+kubectl annotate node node-b1 multicluster.antrea.io/gateway=true
+```
+
+## What is Next
+
+So far, we set up an Antrea Multi-cluster ClusterSet with two clusters following
+the above sections of this guide. Next, you can start to consume the Antrea
+Multi-cluster features with the ClusterSet, including [Multi-cluster Services](user-guide.md#multi-cluster-service),
+[Multi-cluster NetworkPolicy](user-guide.md#multi-cluster-networkpolicy), and
+[ClusterNetworkPolicy replication](user-guide.md#clusternetworkpolicy-replication).
+Please check the relevant Antrea Multi-cluster User Guide sections to learn more.
+
+If you want to add a new member cluster to your ClusterSet, you can follow the
+steps for cluster B to do so. For example, you can run the following command to
+join the ClusterSet in a member cluster with ID `test-cluster-member2`:
+
+```bash
+antctl mc join --clusterid test-cluster-member2 -n kube-system --config-file join-config.yml
+```
+
+## Steps with YAML Manifests
+
+### Set up Leader and Member in Cluster A
+
+#### Step 1 - deploy Antrea Multi-cluster Controllers for leader and member
+
+Run the following commands to deploy Multi-cluster Controller for the leader
+into Namespace `antrea-multicluster` (Namespace `antrea-multicluster` will be
+created by the commands), and Multi-cluster Controller for the member into
+Namespace `kube-system`.
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-global.yml
+kubectl create ns antrea-multicluster
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-namespaced.yml
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+#### Step 2 - initialize ClusterSet
+
+Antrea provides several template YAML manifests to set up a ClusterSet more quickly.
+You can run the following commands that use the template manifests to create a
+ClusterSet named `test-clusterset` in the leader cluster and a default token
+for the member clusters (both cluster A and B in our case) to join the
+ClusterSet.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/leader-clusterset-template.yml
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/leader-access-token-template.yml
+kubectl get secret default-member-token -n antrea-multicluster -o yaml | grep -w -e '^apiVersion' -e '^data' -e '^metadata' -e '^ *name:' -e '^kind' -e ' ca.crt' -e ' token:' -e '^type' -e ' namespace' | sed -e 's/kubernetes.io\/service-account-token/Opaque/g' -e 's/antrea-multicluster/kube-system/g' > default-member-token.yml
+```
+
+The last command saves the token Secret manifest to `default-member-token.yml`,
+which will be needed for member clusters to join the ClusterSet. Note, in this
+example, we use a shared token for all member clusters. If you want to use a
+separate token for each member cluster for security considerations, you can
+follow the instructions in the [Multi-cluster User Guide](user-guide.md#set-up-access-to-leader-cluster).
+
+Next, run the following commands to make cluster A join the ClusterSet also as a
+member:
+
+```bash
+kubectl apply -f default-member-token.yml
+curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml
+sed -e 's/test-cluster-member/test-cluster-leader/g' -e 's/<LEADER_CLUSTER_IP>/172.10.0.11/g' member-clusterset.yml | kubectl apply -f -
+```
+
+Here, `172.10.0.11` is the `kube-apiserver` IP of cluster A. You should replace
+it with the `kube-apiserver` IP of your leader cluster.
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Assuming K8s Node `node-a1` is selected for the Multi-cluster Gateway, run
+the following command to annotate the Node:
+
+```bash
+kubectl annotate node node-a1 multicluster.antrea.io/gateway=true
+```
+
+### Set up Cluster B
+
+Let us switch to cluster B. All the `kubectl` commands in the following steps
+should be run with the `kubeconfig` for cluster B.
+
+#### Step 1 - deploy Antrea Multi-cluster Controller for member
+
+Run the following command to deploy the member Multi-cluster Controller into
+Namespace `kube-system`.
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+You can run the following command to verify the `antrea-mc-controller` Pod is
+deployed and running:
+
+```bash
+$ kubectl get all -A -l="component=antrea-mc-controller"
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 40s
+
+NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
+kube-system deployment.apps/antrea-mc-controller 1/1 1 1 40s
+```
+
+#### Step 2 - join ClusterSet
+
+Run the following commands to make cluster B join the ClusterSet:
+
+```bash
+kubectl apply -f default-member-token.yml
+curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml
+sed -e 's/<LEADER_CLUSTER_IP>/172.10.0.11/g' member-clusterset.yml | kubectl apply -f -
+```
+
+`default-member-token.yml` saves the default member token which was generated
+when initializing the ClusterSet in cluster A.
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Assuming K8s Node `node-b1` is chosen to be the Multi-cluster Gateway for cluster
+B, run the following command to annotate the Node:
+
+```bash
+kubectl annotate node node-b1 multicluster.antrea.io/gateway=true
+```
+
+### Add new member clusters
+
+If you want to add a new member cluster to your ClusterSet, you can follow the
+steps for cluster B to do so. Remember to update the member cluster ID `spec.clusterID`
+in `member-clusterset-template.yml` to the new member cluster's ID in step 2 of
+joining the ClusterSet. For example, you can run the following commands to join the
+ClusterSet in a member cluster with ID `test-cluster-member2`:
+
+```bash
+kubectl apply -f default-member-token.yml
+curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml
+sed -e 's/<LEADER_CLUSTER_IP>/172.10.0.11/g' -e 's/test-cluster-member/test-cluster-member2/g' member-clusterset.yml | kubectl apply -f -
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/upgrade.md b/content/docs/v2.2.0-alpha.2/docs/multicluster/upgrade.md
new file mode 100644
index 00000000..f9229490
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/upgrade.md
@@ -0,0 +1,128 @@
+# Antrea Multi-cluster Upgrade Guide
+
+The Antrea Multi-cluster feature was introduced in v1.5.0. There have been no data-plane
+related changes since release v1.5.0, so the Antrea deployment and the Antrea Multi-cluster
+deployment are independent. However, we suggest keeping Antrea and Antrea Multi-cluster
+at the same version, considering there will be data-plane changes involved in the future.
+Please refer to [Antrea upgrade and supported version skew](../versioning.md#antrea-upgrade-and-supported-version-skew)
+to learn the requirement of Antrea upgrade. This doc focuses on Multi-cluster deployment only.
+
+The goal is to support 'graceful' upgrade. A Multi-cluster upgrade will not disrupt
+the data plane of member clusters, but there can be downtime in processing new
+configurations when individual components restart:
+
+- During the Leader Controller restart, a new member cluster, ClusterSet or ResourceExport will
+  not be processed. This is because the Controller also runs the validation webhooks for
+  MemberClusterAnnounce, ClusterSet and ResourceExport.
+- During the Member Controller restart, a new ClusterSet will not be processed, because
+  the Controller runs the validation webhook for ClusterSet.
+
+Our goal is to support version skew for the different Antrea Multi-cluster components, but the
+Multi-cluster feature is still in Alpha, and the API is not stable yet. Our recommendation
+is to always upgrade Antrea Multi-cluster to the same version for a ClusterSet.
+
+- **Antrea Leader Controller**: must be upgraded first.
+- **Antrea Member Controller**: must be the same version as the **Antrea Leader Controller**.
+- **Antctl**: must not be newer than the **Antrea Leader/Member Controller**. Please
+  note that antctl support for Multi-cluster was added in v1.6.0.
+
+## Upgrade in one ClusterSet
+
+Within a ClusterSet, we recommend that all member and leader clusters run the same version.
+During the Leader Controller upgrade, resource export/import between member clusters is not
+supported. Before all member clusters are upgraded to the same version as the Leader
+Controller, features introduced in the old version should still work across clusters, but
+there is no guarantee for features introduced in the new version.
+
+The upgrade should have no impact on imported resources like Services, Endpoints
+or AntreaClusterNetworkPolicies.
+
+## Upgrade from a version prior to v1.13
+
+Prior to Antrea v1.13, the `ClusterClaim` CRD is used to define both the local Cluster ID and
+the ClusterSet ID. Since Antrea v1.13, the `ClusterClaim` CRD is removed, and the `ClusterSet`
+CRD solely defines a ClusterSet. The name of a `ClusterSet` CR must match the ClusterSet ID,
+and a new `clusterID` field specifies the local Cluster ID.
+
+After upgrading Antrea Multi-cluster Controller from a version older than v1.13, the new
+Multi-cluster Controller can still recognize and work with the old version `ClusterClaim` and
+`ClusterSet` CRs. However, we still suggest updating the `ClusterSet` CR to the new version after
+upgrading the Multi-cluster Controller: just update the existing `ClusterSet` CR and add the
+right `clusterID` to the spec. An example `ClusterSet` CR looks like the following:
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset # This value must match the ClusterSet ID.
+ namespace: kube-system
+spec:
+  clusterID: test-cluster-north # The newly added field since v1.13.
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-north-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+You may also delete the `ClusterClaim` CRD after the upgrade, and then all existing `ClusterClaim`
+CRs will be removed automatically after the CRD is deleted.
+
+```bash
+kubectl delete crds clusterclaims.multicluster.crd.antrea.io
+```
+
+## APIs deprecation policy
+
+The Antrea Multi-cluster APIs are built using K8s CustomResourceDefinitions and we
+follow the same versioning scheme as the K8s APIs and the same [deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/).
+
+Other than the most recent API versions in each track, older API versions must be
+supported after their announced deprecation for a duration of no less than:
+
+- GA: 12 months
+- Beta: 9 months
+- Alpha: N/A (can be removed immediately)
+
+K8s has a [moratorium](https://github.com/kubernetes/kubernetes/issues/52185) on the
+removal of API object versions that have been persisted to storage. We adopt the following
+rules for the CustomResources which are persisted by the K8s apiserver.
+
+- Alpha API versions may be removed at any time.
+- The [`deprecated` field](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation) must be used for CRDs to indicate that a particular version of
+  the resource has been deprecated (see the sketch after this list).
+- Beta and GA API versions must be supported after deprecation for the respective
+ durations stipulated above before they can be removed.
+- For deprecated Beta and GA API versions, a [conversion webhook](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion) must be provided along with
+ each Antrea release, until the API version is removed altogether.
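+
+A hypothetical illustration of the `deprecated` field in a CRD version definition.
+The CRD name and versions below are made up for illustration, not an actual
+Antrea CRD:
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: examples.multicluster.crd.antrea.io # hypothetical CRD
+spec:
+  group: multicluster.crd.antrea.io
+  names:
+    kind: Example
+    plural: examples
+  scope: Namespaced
+  versions:
+    - name: v1alpha1
+      served: true # still served during the deprecation period
+      storage: false
+      deprecated: true # clients receive a deprecation warning when using this version
+      deprecationWarning: "multicluster.crd.antrea.io/v1alpha1 Example is deprecated; use v1alpha2"
+      schema:
+        openAPIV3Schema:
+          type: object
+          x-kubernetes-preserve-unknown-fields: true
+    - name: v1alpha2
+      served: true
+      storage: true
+      schema:
+        openAPIV3Schema:
+          type: object
+          x-kubernetes-preserve-unknown-fields: true
+```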
+
+## Supported K8s versions
+
+Please refer to [Supported K8s versions](../versioning.md#supported-k8s-versions)
+to learn the details.
+
+## Feature list
+
+The following is the Antrea Multi-cluster feature list. For details on each feature,
+please refer to [Antrea Multi-cluster Architecture](./architecture.md).
+
+| Feature | Supported in |
+| -------------------------------- | ------------ |
+| Service Export/Import | v1.5.0 |
+| ClusterNetworkPolicy Replication | v1.6.0 |
+
+## Known Issues
+
+When you try to directly apply a newer Antrea Multi-cluster YAML manifest, as
+provided with [an Antrea release](https://github.com/antrea-io/antrea/releases), you will
+probably encounter an error like the one below if you are upgrading the Multi-cluster
+components from v1.5.0 to a newer version:
+
+```log
+label issue:The Deployment "antrea-mc-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"antrea", "component":"antrea-mc-controller"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
+```
+
+The issue is caused by the label change introduced by [PR3266](https://github.com/antrea-io/antrea/pull/3266).
+The reason is that mutation of label selectors on Deployments is not allowed in `apps/v1beta2`
+and later. You need to delete the Deployment "antrea-mc-controller" first, then run
+`kubectl apply -f` with the manifest of the newer version, as sketched below.
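+
+A minimal sketch, assuming the leader controller was deployed in the
+`antrea-multicluster` Namespace and `$TAG` is set to the newer Antrea version
+(manifest name as used elsewhere in these docs):
+
+```bash
+# Delete the old Deployment (its label selector cannot be mutated in place),
+# then apply the manifest of the newer version.
+kubectl delete deployment antrea-mc-controller -n antrea-multicluster
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml
+```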
diff --git a/content/docs/v2.2.0-alpha.2/docs/multicluster/user-guide.md b/content/docs/v2.2.0-alpha.2/docs/multicluster/user-guide.md
new file mode 100644
index 00000000..025dc198
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/multicluster/user-guide.md
@@ -0,0 +1,910 @@
+# Antrea Multi-cluster User Guide
+
+## Table of Contents
+
+
+- [Quick Start](#quick-start)
+- [Installation](#installation)
+ - [Preparation](#preparation)
+ - [Deploy Antrea Multi-cluster Controller](#deploy-antrea-multi-cluster-controller)
+ - [Deploy in a Dedicated Leader Cluster](#deploy-in-a-dedicated-leader-cluster)
+ - [Deploy in a Member Cluster](#deploy-in-a-member-cluster)
+ - [Deploy Leader and Member in One Cluster](#deploy-leader-and-member-in-one-cluster)
+ - [Create ClusterSet](#create-clusterset)
+ - [Set up Access to Leader Cluster](#set-up-access-to-leader-cluster)
+ - [Initialize ClusterSet](#initialize-clusterset)
+ - [Initialize ClusterSet for a Dual-role Cluster](#initialize-clusterset-for-a-dual-role-cluster)
+- [Multi-cluster Gateway Configuration](#multi-cluster-gateway-configuration)
+ - [Multi-cluster WireGuard Encryption](#multi-cluster-wireguard-encryption)
+- [Multi-cluster Service](#multi-cluster-service)
+- [Multi-cluster Pod-to-Pod Connectivity](#multi-cluster-pod-to-pod-connectivity)
+- [Multi-cluster NetworkPolicy](#multi-cluster-networkpolicy)
+ - [Egress Rule to Multi-cluster Service](#egress-rule-to-multi-cluster-service)
+ - [Ingress Rule](#ingress-rule)
+- [ClusterNetworkPolicy Replication](#clusternetworkpolicy-replication)
+- [Build Antrea Multi-cluster Controller Image](#build-antrea-multi-cluster-controller-image)
+- [Uninstallation](#uninstallation)
+ - [Remove a Member Cluster](#remove-a-member-cluster)
+ - [Remove a Leader Cluster](#remove-a-leader-cluster)
+- [Known Issue](#known-issue)
+
+
+Antrea Multi-cluster implements [Multi-cluster Service API](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api),
+which allows users to create multi-cluster Services that can be accessed cross
+clusters in a ClusterSet. Antrea Multi-cluster also extends Antrea-native
+NetworkPolicy to support Multi-cluster NetworkPolicy rules that apply to
+cross-cluster traffic, and ClusterNetworkPolicy replication that allows a
+ClusterSet admin to create ClusterNetworkPolicies which are replicated across
+the entire ClusterSet and enforced in all member clusters. Antrea Multi-cluster
+was first introduced in Antrea v1.5.0. In Antrea v1.7.0, the Multi-cluster
+Gateway feature was added that supports routing multi-cluster Service traffic
+through tunnels among clusters. The ClusterNetworkPolicy replication feature is
+supported since Antrea v1.6.0, and Multi-cluster NetworkPolicy rules are
+supported since Antrea v1.10.0.
+
+Antrea v1.13 promoted the ClusterSet CRD version from v1alpha1 to v1alpha2. If you
+plan to upgrade from a previous version to v1.13 or later, please check
+the [upgrade guide](./upgrade.md#upgrade-from-a-version-prior-to-v113).
+
+## Quick Start
+
+Please refer to the [Quick Start Guide](quick-start.md) to learn how to build a
+ClusterSet with two clusters quickly.
+
+## Installation
+
+In this guide, all Multi-cluster installation and ClusterSet configuration are
+done by applying Antrea Multi-cluster YAML manifests. All operations
+can also be done with `antctl` Multi-cluster commands, which may be more
+convenient in many cases. You can refer to the [Quick Start Guide](quick-start.md)
+and [antctl Guide](antctl.md) to learn how to use the Multi-cluster commands.
+
+### Preparation
+
+We assume an Antrea version >= `v1.8.0` is used in this guide, and the Antrea
+version is set to an environment variable `TAG`. For example, the following
+command sets the Antrea version to `v1.8.0`.
+
+```bash
+export TAG=v1.8.0
+```
+
+To use the latest version of Antrea Multi-cluster from the Antrea main branch,
+you can change the YAML manifest path to: `https://github.com/antrea-io/antrea/tree/main/multicluster/build/yamls/`
+when applying or downloading an Antrea YAML manifest.
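+
+For example, a sketch for a member cluster, assuming the manifest name on the
+main branch matches the released one:
+
+```bash
+# Apply the member manifest built from the Antrea main branch instead of a release.
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/multicluster/build/yamls/antrea-multicluster-member.yml
+```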
+
+[Multi-cluster Services](#multi-cluster-service) and
+[multi-cluster Pod-to-Pod connectivity](#multi-cluster-pod-to-pod-connectivity),
+in particular configurations (please check the corresponding sections for more
+information), require an Antrea Multi-cluster Gateway to be set up in each member
+cluster by default to route Service and Pod traffic across clusters. To support
+Multi-cluster Gateways, `antrea-agent` must be deployed with the `Multicluster`
+feature enabled in a member cluster. You can set the following configuration parameters
+in `antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster`
+feature:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ namespace: "" # Change to the Namespace where antrea-mc-controller is deployed.
+```
+
+In order for Multi-cluster features to work, it is necessary for `enableGateway` to be set to true by
+the user, except when Pod-to-Pod direct connectivity already exists (e.g., provided by the cloud provider)
+and `endpointIPType` is configured as `PodIP`. Details can be found in [Multi-cluster Services](#multi-cluster-service).
+Please note that [Multi-cluster NetworkPolicy](#multi-cluster-networkpolicy) always requires
+Gateway.
+
+Prior to Antrea v1.11.0, Multi-cluster Gateway only works with Antrea `encap` traffic
+mode, and all member clusters in a ClusterSet must use the same tunnel type. Since
+Antrea v1.11.0, Multi-cluster Gateway also works with the Antrea `noEncap`, `hybrid`
+and `networkPolicyOnly` modes. For `noEncap` and `hybrid` modes, Antrea Multi-cluster
+deployment is the same as `encap` mode. For `networkPolicyOnly` mode, we need extra
+Antrea configuration changes to support Multi-cluster Gateway. Please check
+[the deployment guide](./policy-only-mode.md) for more information. When using
+Multi-cluster Gateway, it is not possible to enable WireGuard for inter-Node
+traffic within the same member cluster. It is however possible to [enable
+WireGuard for cross-cluster traffic](#multi-cluster-wireguard-encryption)
+between member clusters.
+
+### Deploy Antrea Multi-cluster Controller
+
+A Multi-cluster ClusterSet comprises a single leader cluster and at least
+two member clusters. Antrea Multi-cluster Controller needs to be deployed in the
+leader and all member clusters. A cluster can serve as the leader, and meanwhile
+also be a member cluster of the ClusterSet. To deploy Multi-cluster Controller
+in a dedicated leader cluster, please refer to [Deploy in a Dedicated Leader
+cluster](#deploy-in-a-dedicated-leader-cluster). To deploy Multi-cluster
+Controller in a member cluster, please refer to [Deploy in a Member Cluster](#deploy-in-a-member-cluster).
+To deploy Multi-cluster Controller in a dual-role cluster, please refer to
+[Deploy Leader and Member in One Cluster](#deploy-leader-and-member-in-one-cluster).
+
+#### Deploy in a Dedicated Leader Cluster
+
+Since Antrea v1.14.0, you can run the following command to install Multi-cluster Controller
+in the leader cluster. Multi-cluster Controller is deployed into a Namespace. You must
+create the Namespace first, and then apply the deployment manifest in the Namespace.
+
+For a version older than v1.14, please check the user guide document of the version:
+`https://github.com/antrea-io/antrea/blob/release-$version/docs/multicluster/user-guide.md`,
+where `$version` can be `1.12`, `1.13` etc.
+
+ ```bash
+ kubectl create ns antrea-multicluster
+ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml
+ ```
+
+The Multi-cluster Controller in the leader cluster will be deployed in Namespace `antrea-multicluster`
+by default. If you'd like to use another Namespace, you can change `antrea-multicluster` to the desired
+Namespace in `antrea-multicluster-leader-namespaced.yml`, for example:
+
+```bash
+kubectl create ns <NAMESPACE>
+curl -L https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-namespaced.yml > antrea-multicluster-leader-namespaced.yml
+sed 's/antrea-multicluster/<NAMESPACE>/g' antrea-multicluster-leader-namespaced.yml | kubectl apply -f -
+```
+
+#### Deploy in a Member Cluster
+
+You can run the following command to install Multi-cluster Controller in a
+member cluster. The command will run the controller in the "member" mode in the
+`kube-system` Namespace. If you want to use a Namespace other than
+`kube-system`, you can edit `antrea-multicluster-member.yml` and change
+`kube-system` to the desired Namespace.
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+#### Deploy Leader and Member in One Cluster
+
+We need to run two instances of Multi-cluster Controller in the dual-role
+cluster, one in leader mode and another in member mode.
+
+1. Follow the steps in section [Deploy in a Dedicated Leader Cluster](#deploy-in-a-dedicated-leader-cluster)
+ to deploy the leader controller and import the Multi-cluster CRDs.
+2. Follow the steps in section [Deploy in a Member Cluster](#deploy-in-a-member-cluster)
+ to deploy the member controller.
+
+### Create ClusterSet
+
+An Antrea Multi-cluster ClusterSet should include at least one leader cluster
+and two member clusters. As an example, in the following sections we will create
+a ClusterSet `test-clusterset` which has two member clusters with cluster ID
+`test-cluster-east` and `test-cluster-west` respectively, and one leader cluster
+with ID `test-cluster-north`. Please note that the name of a ClusterSet CR must
+match the ClusterSet ID. In all the member and leader clusters of a ClusterSet,
+the ClusterSet CR must use the ClusterSet ID as the name, e.g. `test-clusterset`
+in the example of this guide.
+
+#### Set up Access to Leader Cluster
+
+We first need to set up access to the leader cluster's API server for all member
+clusters. We recommend creating one ServiceAccount for each member for
+fine-grained access control.
+
+The Multi-cluster Controller deployment manifest for a leader cluster also creates
+a default member cluster token. If you prefer to use the default token, you can skip
+step 1 and replace the Secret name `member-east-token` with the default token Secret
+`antrea-mc-member-access-token` in step 2.
+
+1. Apply the following YAML manifest in the leader cluster to set up access for
+ `test-cluster-east`:
+
+ ```yml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: member-east
+ namespace: antrea-multicluster
+ ---
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: member-east-token
+ namespace: antrea-multicluster
+ annotations:
+ kubernetes.io/service-account.name: member-east
+ type: kubernetes.io/service-account-token
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: RoleBinding
+ metadata:
+ name: member-east
+ namespace: antrea-multicluster
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: antrea-mc-member-cluster-role
+ subjects:
+ - kind: ServiceAccount
+ name: member-east
+ namespace: antrea-multicluster
+ ```
+
+2. Generate the token Secret manifest from the leader cluster, and create a
+ Secret with the manifest in member cluster `test-cluster-east`, e.g.:
+
+ ```bash
+ # Generate the file 'member-east-token.yml' from your leader cluster
+ kubectl get secret member-east-token -n antrea-multicluster -o yaml | grep -w -e '^apiVersion' -e '^data' -e '^metadata' -e '^ *name:' -e '^kind' -e ' ca.crt' -e ' token:' -e '^type' -e ' namespace' | sed -e 's/kubernetes.io\/service-account-token/Opaque/g' -e 's/antrea-multicluster/kube-system/g' > member-east-token.yml
+ # Apply 'member-east-token.yml' to the member cluster.
+ kubectl apply -f member-east-token.yml --kubeconfig=/path/to/kubeconfig-of-member-test-cluster-east
+ ```
+
+3. Replace all occurrences of `east` with `west` and repeat steps 1 and 2 for the
+   other member cluster `test-cluster-west`.
+
+#### Initialize ClusterSet
+
+In all clusters, a `ClusterSet` CR must be created to define the ClusterSet and claim that
+the cluster is a member of the ClusterSet.
+
+- Create `ClusterSet` in the leader cluster `test-cluster-north` with the following YAML
+ manifest (you can also refer to [leader-clusterset-template.yml](../../multicluster/config/samples/clusterset_init/leader-clusterset-template.yml)):
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: antrea-multicluster
+spec:
+ clusterID: test-cluster-north
+ leaders:
+ - clusterID: test-cluster-north
+```
+
+- Create `ClusterSet` in member cluster `test-cluster-east` with the following
+YAML manifest (you can also refer to [member-clusterset-template.yml](../../multicluster/config/samples/clusterset_init/member-clusterset-template.yml)):
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: kube-system
+spec:
+ clusterID: test-cluster-east
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-east-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+Note: update `server: "https://172.18.0.1:6443"` in the `ClusterSet` spec to the
+correct leader cluster API server address.
+
+- Create `ClusterSet` in member cluster `test-cluster-west`:
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: kube-system
+spec:
+ clusterID: test-cluster-west
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-west-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+#### Initialize ClusterSet for a Dual-role Cluster
+
+If you want to make the leader cluster `test-cluster-north` also a member
+cluster of the ClusterSet, make sure you follow the steps in [Deploy Leader and
+Member in One Cluster](#deploy-leader-and-member-in-one-cluster) and repeat the
+steps in [Set up Access to Leader Cluster](#set-up-access-to-leader-cluster) as
+well (don't forget to replace all occurrences of `east` with `north` when you repeat the steps).
+
+Then create the `ClusterSet` CR in cluster `test-cluster-north` in the
+`kube-system` Namespace (where the member Multi-cluster Controller runs):
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: kube-system
+spec:
+ clusterID: test-cluster-north
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-north-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+## Multi-cluster Gateway Configuration
+
+Multi-cluster Gateways are responsible for establishing tunnels between clusters.
+Each member cluster should have one Node serving as its Multi-cluster Gateway.
+Multi-cluster Service traffic is routed among clusters through the tunnels between
+Gateways.
+
+The following table summarizes cross-cluster communication support for different configurations.
+
+| Pod-to-Pod connectivity provided by underlay | Gateway Enabled | MC EndpointTypes | Cross-cluster Service/Pod communications |
+| -------------------------------------------- | --------------- | ----------------- | ---------------------------------------- |
+| No | No | N/A | No |
+| Yes | No | PodIP | Yes |
+| No | Yes | PodIP/ClusterIP | Yes |
+| Yes | Yes | PodIP/ClusterIP | Yes |
+
+After a member cluster joins a ClusterSet, and the `Multicluster` feature is
+enabled on `antrea-agent`, you can select a Node of the cluster to serve as
+the Multi-cluster Gateway by adding an annotation:
+`multicluster.antrea.io/gateway=true` to the K8s Node. For example, you can run
+the following command to annotate Node `node-1` as the Multi-cluster Gateway:
+
+```bash
+kubectl annotate node node-1 multicluster.antrea.io/gateway=true
+```
+
+You can annotate multiple Nodes in a member cluster as candidates for the
+Multi-cluster Gateway, but only one Node will be selected as the active Gateway.
+Before Antrea v1.9.0, the Gateway Node was randomly selected and would never
+change unless the Node or its `gateway` annotation was deleted. Starting with
+Antrea v1.9.0, Antrea Multi-cluster Controller guarantees that a "ready" Node
+is selected as the Gateway; when the current Gateway Node's status changes
+to not "ready", Antrea will try to select another "ready" Node from the
+candidate Nodes to be the Gateway.
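+
+For example, to mark two candidate Nodes (hypothetical names `node-1` and `node-2`):
+
+```bash
+# Both Nodes become Gateway candidates; the controller picks one active Gateway.
+kubectl annotate node node-1 node-2 multicluster.antrea.io/gateway=true
+```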
+
+Once a Gateway Node is selected, the Multi-cluster Controller in the member cluster
+will create a `Gateway` CR with the same name as the Node. You can check it with the
+command:
+
+```bash
+$ kubectl get gateway -n kube-system
+NAME GATEWAY IP INTERNAL IP AGE
+node-1 10.17.27.55 10.17.27.55 10s
+```
+
+`internalIP` of the Gateway is used for the tunnels between the Gateway Node and
+other Nodes in the local cluster, while `gatewayIP` is used for the tunnels to
+remote Gateways of other member clusters. Multi-cluster Controller discovers the
+IP addresses from the K8s Node resource of the Gateway Node. It will always use
+`InternalIP` of the K8s Node as the Gateway's `internalIP`. For `gatewayIP`,
+there are several possibilities:
+
+* By default, the K8s Node's `InternalIP` is used as `gatewayIP` too.
+* You can choose to use the K8s Node's `ExternalIP` as `gatewayIP`, by changing
+the configuration option `gatewayIPPrecedence` to the value `external` when
+deploying the member Multi-cluster Controller. The configuration option is
+defined in ConfigMap `antrea-mc-controller-config` in `antrea-multicluster-member.yml`.
+* When the Gateway Node has a separate IP for external communication or is
+associated with a public IP (e.g. an Elastic IP on AWS), but the IP is not added
+to the K8s Node, you can still choose to use the IP as `gatewayIP`, by adding an
+annotation `multicluster.antrea.io/gateway-ip=<IP_ADDRESS>` to the K8s Node, as shown below.
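+
+A minimal sketch, assuming Node `node-1` should use the (hypothetical) public IP
+`203.0.113.10` as its Gateway IP:
+
+```bash
+# 203.0.113.10 is a placeholder address; replace it with your Node's public IP.
+kubectl annotate node node-1 multicluster.antrea.io/gateway-ip=203.0.113.10
+```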
+
+When choosing a candidate Node for Multi-cluster Gateway, you need to make sure
+the resulting `gatewayIP` can be reached from the remote Gateways. You may need
+to [configure firewall or security groups](../network-requirements.md) properly
+to allow the tunnels between Gateway Nodes. As of now, only IPv4 Gateway IPs are
+supported.
+
+After the Gateway is created, Multi-cluster Controller will be responsible
+for exporting the cluster's network information to other member clusters
+through the leader cluster, including the cluster's Gateway IP and Service
+CIDR. Multi-cluster Controller will try to discover the cluster's Service CIDR
+automatically, but you can also manually specify the `serviceCIDR` option in
+ConfigMap `antrea-mc-controller-config`. In other member clusters, a
+ClusterInfoImport CR will be created for the cluster which includes the
+exported network information. For example, in cluster `test-cluster-west`, you
+you can see a ClusterInfoImport CR with name `test-cluster-east-clusterinfo`
+is created for cluster `test-cluster-east`:
+
+```bash
+$ kubectl get clusterinfoimport -n kube-system
+NAME CLUSTER ID SERVICE CIDR AGE
+test-cluster-east-clusterinfo test-cluster-east 110.96.0.0/20 10s
+```
+
+Make sure you repeat the same step to assign a Gateway Node in all member
+clusters. Once you confirm that all `Gateway` and `ClusterInfoImport` CRs are
+created correctly, you can follow the [Multi-cluster Service](#multi-cluster-service)
+section to create multi-cluster Services and verify cross-cluster Service
+access.
+
+### Multi-cluster WireGuard Encryption
+
+Since Antrea v1.12.0, Antrea Multi-cluster supports WireGuard tunnels between
+member clusters. If WireGuard is enabled, the WireGuard interface and routes
+will be created by Antrea Agent on the Gateway Node, and all cross-cluster
+traffic will be encrypted and forwarded to the WireGuard tunnel.
+
+Please note that WireGuard encryption requires the `wireguard` kernel module to be
+present on the Kubernetes Nodes. The `wireguard` module has been part of the mainline
+kernel since Linux 5.6. Alternatively, you can compile the module from source code for
+kernel versions >= 3.10. [This WireGuard installation guide](https://www.wireguard.com/install)
+documents how to install WireGuard together with the kernel module on various
+operating systems.
+
+To enable WireGuard encryption, the `trafficEncryptionMode` option
+in the Multi-cluster configuration should be set to `wireGuard` and the `enableGateway`
+field should be set to `true`, as follows:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ trafficEncryptionMode: "wireGuard"
+ wireGuard:
+ port: 51821
+```
+
+When WireGuard encryption is enabled for cross-cluster traffic as part of the
+Multi-cluster feature, in-cluster encryption (for traffic within a given member
+cluster) is no longer supported, not even with IPsec.
+
+## Multi-cluster Service
+
+After you set up a ClusterSet properly, you can create a `ServiceExport` CR to
+export a Service from one cluster to other clusters in the ClusterSet, like the
+example below:
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+ name: nginx
+ namespace: default
+```
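+
+A minimal sketch of applying this CR (assuming an `nginx` Service already exists
+in the `default` Namespace of the exporting cluster):
+
+```bash
+# Apply the ServiceExport CR in the cluster that owns the Service.
+cat <<'EOF' | kubectl apply -f -
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+  name: nginx
+  namespace: default
+EOF
+```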
+
+For example, once you export the `default/nginx` Service in member cluster
+`test-cluster-west`, it will be automatically imported in member cluster
+`test-cluster-east`. A Service and an Endpoints with name
+`default/antrea-mc-nginx` will be created in `test-cluster-east`, as well as
+a ServiceImport CR with name `default/nginx`. Now, Pods in `test-cluster-east`
+can access the imported Service using its ClusterIP, and the requests will be
+routed to the backend `nginx` Pods in `test-cluster-west`. You can check the
+imported Service and ServiceImport with commands:
+
+```bash
+$ kubectl get service antrea-mc-nginx -n default
+NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
+antrea-mc-nginx   ClusterIP   10.107.57.62   <none>        443/TCP   10s
+
+$ kubectl get serviceimport nginx -n default
+NAME TYPE IP AGE
+nginx ClusterSetIP ["10.19.57.62"] 10s
+```
+
+As part of the Service export/import process, in the leader cluster, two
+ResourceExport CRs will be created in the Multi-cluster Controller Namespace,
+for the exported Service and Endpoints respectively, as well as two
+ResourceImport CRs. You can check them in the leader cluster with commands:
+
+```bash
+$ kubectl get resourceexport -n antrea-multicluster
+NAME CLUSTER ID KIND NAMESPACE NAME AGE
+test-cluster-west-default-nginx-endpoints test-cluster-west Endpoints default nginx 30s
+test-cluster-west-default-nginx-service test-cluster-west Service default nginx 30s
+
+$ kubectl get resourceimport -n antrea-multicluster
+NAME KIND NAMESPACE NAME AGE
+default-nginx-endpoints Endpoints default nginx 99s
+default-nginx-service ServiceImport default nginx 99s
+```
+
+Whenever there is a change to the exported Service, the imported multi-cluster
+Service resources will be updated accordingly. Multiple member clusters can
+export the same Service (with the same name and Namespace). In this case, the
+imported Service in a member cluster will include endpoints from all the export
+clusters, and Service requests will be load-balanced across all these clusters.
+Even when the client Pod's cluster also exported the Service, the Service
+requests may be routed to other clusters, and the endpoints from the local
+cluster do not take precedence. A Service cannot have conflicting definitions in
+different export clusters, otherwise only the first export will be replicated to
+other clusters; other exports as well as new updates to the Service will be
+ignored, until the user fixes the conflicts. For example, after a member cluster
+exported a Service `default/nginx` with TCP Port `80`, other clusters can only
+export the same Service with the same Ports definition, including Port names. At
+the moment, Antrea Multi-cluster supports only IPv4 multi-cluster Services.
+
+By default, a multi-cluster Service will use the exported Services' ClusterIPs (the
+original Service ClusterIPs in the export clusters) as Endpoints. Since Antrea
+v1.9.0, Antrea Multi-cluster also supports using the backend Pod IPs as the
+multi-cluster Service endpoints. You can change the value of configuration option
+`endpointIPType` in ConfigMap `antrea-mc-controller-config` from `ClusterIP`
+to `PodIP` to use Pod IPs as endpoints. All member clusters in a ClusterSet should
+use the same endpoint type. Existing ServiceExports should be re-exported after
+changing `endpointIPType`. `ClusterIP` type requires that Service CIDRs (ClusterIP
+ranges) must not overlap among member clusters, and always requires Multi-cluster
+Gateways to be configured. `PodIP` type requires Pod CIDRs not to overlap among
+clusters, and it also requires Multi-cluster Gateways when there is no direct Pod-to-Pod
+connectivity across clusters. Also refer to [Multi-cluster Pod-to-Pod Connectivity](#multi-cluster-pod-to-pod-connectivity)
+for more information.
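+
+For example, a sketch of switching to `PodIP` endpoints, assuming the member
+controller runs in `kube-system` with the default Deployment name:
+
+```bash
+# Edit the ConfigMap and set "endpointIPType: PodIP" in
+# controller_manager_config.yaml, then restart the controller so it reloads
+# the configuration. Remember to re-export existing ServiceExports afterwards.
+kubectl edit configmap -n kube-system antrea-mc-controller-config
+kubectl rollout restart deployment antrea-mc-controller -n kube-system
+```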
+
+## Multi-cluster Pod-to-Pod Connectivity
+
+Since Antrea v1.9.0, Multi-cluster supports routing Pod traffic across clusters
+through Multi-cluster Gateways. Pod IPs can be reached in all member clusters
+within a ClusterSet. To enable this feature, the cluster's Pod CIDRs must be set
+in ConfigMap `antrea-mc-controller-config` of each member cluster and
+`multicluster.enablePodToPodConnectivity` must be set to `true` in the `antrea-agent`
+configuration.
+Note, **Pod CIDRs must not overlap among clusters to enable cross-cluster
+Pod-to-Pod connectivity**.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ labels:
+ app: antrea
+ name: antrea-mc-controller-config
+ namespace: kube-system
+data:
+ controller_manager_config.yaml: |
+ apiVersion: multicluster.crd.antrea.io/v1alpha1
+ kind: MultiClusterConfig
+ podCIDRs:
+ - "10.10.1.1/16"
+```
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enablePodToPodConnectivity: true
+```
+
+You can edit [antrea-multicluster-member.yml](../../multicluster/build/yamls/antrea-multicluster-member.yml),
+or use `kubectl edit` to change the ConfigMap:
+
+```bash
+kubectl edit configmap -n kube-system antrea-mc-controller-config
+```
+
+Normally, `podCIDRs` should be the value of `kube-controller-manager`'s
+`cluster-cidr` option. If it's left empty, the Pod-to-Pod connectivity feature
+will not be enabled. If you use `kubectl edit` to edit the ConfigMap, then you
+need to restart the `antrea-mc-controller` Pod to load the latest configuration.
+
+## Multi-cluster NetworkPolicy
+
+Antrea-native policies can be enforced on cross-cluster traffic in a ClusterSet.
+To enable Multi-cluster NetworkPolicy features, check the Antrea Controller and
+Agent ConfigMaps and make sure that `enableStretchedNetworkPolicy` is set to
+`true` in addition to enabling the `Multicluster` feature gate:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+      enableStretchedNetworkPolicy: true # required by both egress and ingress rules
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+      enableStretchedNetworkPolicy: true # required only by ingress rules
+ namespace: ""
+```
+
+### Egress Rule to Multi-cluster Service
+
+Restricting Pod egress traffic to the backends of a Multi-cluster Service (which can be in the
+same cluster as the source Pod or in a different cluster) is supported by the Antrea-native
+policy's `toServices` feature in egress rules. To define such a policy, simply put the exported
+Service name and Namespace in the `toServices` field of an Antrea-native policy, and set `scope`
+of the `toServices` peer to `ClusterSet`:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-drop-tenant-to-secured-mc-service
+spec:
+ priority: 1
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: tenant
+ egress:
+ - action: Drop
+ toServices:
+ - name: secured-service # an exported Multi-cluster Service
+ namespace: svcNamespace
+ scope: ClusterSet
+```
+
+The `scope` field of `toServices` rules is supported since Antrea v1.10. For earlier versions
+of Antrea, an equivalent rule can be written by not specifying `scope` and providing the
+imported Service name instead (i.e. `antrea-mc-[svcName]`), as sketched below.
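+
+A minimal sketch of such an equivalent rule. Note that, as an assumption here,
+releases prior to Antrea v1.13 served Antrea-native policies under
+`crd.antrea.io/v1alpha1` rather than `v1beta1`:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha1
+kind: ClusterNetworkPolicy
+metadata:
+  name: acnp-drop-tenant-to-secured-mc-service
+spec:
+  priority: 1
+  tier: securityops
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          role: tenant
+  egress:
+    - action: Drop
+      toServices:
+        - name: antrea-mc-secured-service # the imported Multi-cluster Service name
+          namespace: svcNamespace
+```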
+
+Note that the scope of the policy's `appliedTo` field will still be restricted to the cluster
+where the policy is created. To enforce such a policy for all `role=tenant` Pods in the
+entire ClusterSet, use the [ClusterNetworkPolicy Replication](#clusternetworkpolicy-replication)
+feature described in a later section, and set the `clusterNetworkPolicy` field of
+the ResourceExport to the `acnp-drop-tenant-to-secured-mc-service` spec above. Such
+replication should only be performed by ClusterSet admins, who have clearance to create
+ClusterNetworkPolicies in all clusters of a ClusterSet.
+
+### Ingress Rule
+
+Antrea-native policies now support selecting ingress peers in the ClusterSet scope (since v1.10.0).
+Policy rules can be created to enforce security postures on ingress traffic from all member
+clusters in a ClusterSet:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: drop-tenant-access-to-admin-namespace
+spec:
+ appliedTo:
+ - namespaceSelector:
+ matchLabels:
+ role: admin
+ priority: 1
+ tier: securityops
+ ingress:
+    - action: Drop
+ from:
+ # Select all Pods in role=tenant Namespaces in the ClusterSet
+ - scope: ClusterSet
+ namespaceSelector:
+ matchLabels:
+ role: tenant
+```
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: db-svc-allow-ingress-from-client-only
+ namespace: prod-us-west
+spec:
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: db
+ priority: 1
+ tier: application
+ ingress:
+ - action: Allow
+ from:
+ # Select all Pods in Namespace "prod-us-west" from all clusters in the ClusterSet (if the
+ # Namespace exists in that cluster) whose labels match app=client
+ - scope: ClusterSet
+ podSelector:
+ matchLabels:
+ app: client
+    - action: Drop
+```
+
+As shown in the examples above, setting `scope` to `ClusterSet` expands the
+scope of the `podSelector` or `namespaceSelector` of an ingress peer to the
+entire ClusterSet that the policy is created in. Similar to egress rules, the
+scope of an ingress rule's `appliedTo` is still restricted to the local cluster.
+
+To use the ingress cross-cluster NetworkPolicy feature, the `enableStretchedNetworkPolicy`
+option needs to be set to `true` in `antrea-mc-controller-config`, for each `antrea-mc-controller`
+running in the ClusterSet. Refer to the [previous section](#multi-cluster-pod-to-pod-connectivity)
+on how to change the ConfigMap:
+
+```yaml
+ controller_manager_config.yaml: |
+ apiVersion: multicluster.crd.antrea.io/v1alpha1
+ kind: MultiClusterConfig
+ enableStretchedNetworkPolicy: true
+```
+
+Note that currently ingress stretched NetworkPolicy only works with the Antrea `encap`
+traffic mode.
+
+## ClusterNetworkPolicy Replication
+
+Since Antrea v1.6.0, Multi-cluster admins can specify certain
+ClusterNetworkPolicies to be replicated and enforced across the entire
+ClusterSet. This is especially useful for ClusterSet admins who want to apply a
+consistent security posture across all clusters in the ClusterSet (for
+example, allowing all Namespaces in all clusters to only communicate with Pods
+in their own Namespaces). For more information regarding Antrea ClusterNetworkPolicy
+(ACNP), please refer to [this document](../antrea-network-policy.md).
+
+To achieve such ACNP replication across clusters, admins can, in the leader
+cluster of a ClusterSet, create a `ResourceExport` CR of kind
+`AntreaClusterNetworkPolicy` which contains the ClusterNetworkPolicy spec
+they wish to be replicated. The `ResourceExport` should be created in the
+Namespace where the ClusterSet's leader Multi-cluster Controller runs.
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha1
+kind: ResourceExport
+metadata:
+ name: strict-namespace-isolation-for-test-clusterset
+  namespace: antrea-multicluster # Namespace where the leader Multi-cluster Controller is deployed
+spec:
+ kind: AntreaClusterNetworkPolicy
+ name: strict-namespace-isolation # In each importing cluster, an ACNP of name antrea-mc-strict-namespace-isolation will be created with the spec below
+ clusterNetworkPolicy:
+ priority: 1
+ tier: securityops
+ appliedTo:
+ - namespaceSelector: {} # Selects all Namespaces in the member cluster
+ ingress:
+ - action: Pass
+ from:
+ - namespaces:
+ match: Self # Skip drop rule for traffic from Pods in the same Namespace
+ - podSelector:
+ matchLabels:
+ k8s-app: kube-dns # Skip drop rule for traffic from the core-dns components
+ - action: Drop
+ from:
+ - namespaceSelector: {} # Drop from Pods from all other Namespaces
+```
+
+The above sample spec will create an ACNP in each member cluster which
+implements strict Namespace isolation for that cluster.
+
+Note that because the Tier that an ACNP refers to must exist before the ACNP is applied, an importing
+cluster may fail to create the ACNP to be replicated, if the Tier in the ResourceExport spec cannot be
+found in that particular cluster. If there are such failures, the ACNP creation status of failed member
+clusters will be reported back to the leader cluster as K8s Events, and can be checked by describing
+the `ResourceImport` of the original `ResourceExport`:
+
+```bash
+$ kubectl describe resourceimport -A
+Name: strict-namespace-isolation-antreaclusternetworkpolicy
+Namespace: antrea-multicluster
+API Version: multicluster.crd.antrea.io/v1alpha1
+Kind: ResourceImport
+Spec:
+ Clusternetworkpolicy:
+ Applied To:
+ Namespace Selector:
+ Ingress:
+ Action: Pass
+ Enable Logging: false
+ From:
+ Namespaces:
+ Match: Self
+ Pod Selector:
+ Match Labels:
+ k8s-app: kube-dns
+ Action: Drop
+ Enable Logging: false
+ From:
+ Namespace Selector:
+ Priority: 1
+ Tier: random
+ Kind: AntreaClusterNetworkPolicy
+ Name: strict-namespace-isolation
+ ...
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Warning ACNPImportFailed 2m11s resourceimport-controller ACNP Tier random does not exist in the importing cluster test-cluster-west
+```
+
+In future releases, some additional tooling may become available to automate the
+creation of ResourceExports for ACNPs, and provide a user-friendly way to define
+Multi-cluster NetworkPolicies to be enforced in the ClusterSet.
+
+## Build Antrea Multi-cluster Controller Image
+
+If you'd like to build the Multi-cluster Controller Docker image locally, you can
+follow these steps:
+
+1. Go to your local `antrea` source tree, run `make build-antrea-mc-controller`, and you
+will get a new image named `antrea/antrea-mc-controller:latest` locally.
+2. Run `docker save antrea/antrea-mc-controller:latest > antrea-mcs.tar` to save
+the image.
+3. Copy the image file `antrea-mcs.tar` to the Nodes of your local cluster.
+4. Run `docker load < antrea-mcs.tar` in each Node of your local cluster.
+
+## Uninstallation
+
+### Remove a Member Cluster
+
+If you want to remove a member cluster from a ClusterSet and uninstall Antrea
+Multi-cluster, please follow the steps below.
+
+Note: please replace `kube-system` with the right Namespace in the example
+commands and manifest if Antrea Multi-cluster is not deployed in
+the default Namespace.
+
+1. Delete all ServiceExports and the Multi-cluster Gateway annotation on the
+Gateway Nodes, as sketched below.
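+
+   A sketch, assuming the ServiceExports live in the `default` Namespace and
+   `node-1` is the Gateway Node:
+
+   ```bash
+   # The trailing "-" in the annotate command removes the annotation.
+   kubectl delete serviceexports --all -n default
+   kubectl annotate node node-1 multicluster.antrea.io/gateway-
+   ```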
+
+2. Delete the ClusterSet CR. Antrea Multi-cluster Controller will be
+responsible for cleaning up all resources created by itself automatically.
+
+3. Delete the Antrea Multi-cluster Deployment:
+
+```bash
+kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+### Remove a Leader Cluster
+
+If you want to delete a ClusterSet and uninstall Antrea Multi-cluster in
+a leader cluster, please follow the steps below. You should first
+[remove all member clusters](#remove-a-member-cluster) before removing
+a leader cluster from a ClusterSet.
+
+Note: please replace `antrea-multicluster` with the right Namespace in the
+following example commands and manifest if Antrea Multi-cluster is not
+deployed in the default Namespace.
+
+1. Delete AntreaClusterNetworkPolicy ResourceExports in the leader cluster, as sketched below.
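+
+   For example, assuming the sample ResourceExport created earlier in this guide:
+
+   ```bash
+   # List ResourceExports to find those of kind AntreaClusterNetworkPolicy,
+   # then delete them by name.
+   kubectl get resourceexports -n antrea-multicluster
+   kubectl delete resourceexport strict-namespace-isolation-for-test-clusterset -n antrea-multicluster
+   ```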
+
+2. Verify that there are no remaining MemberClusterAnnounces.
+
+ ```bash
+ kubectl get memberclusterannounce -n antrea-multicluster
+ ```
+
+3. Delete the ClusterSet CR. Antrea Multi-cluster Controller will be
+responsible for cleaning up all resources created by itself automatically.
+
+4. Check that there are no remaining ResourceExports and ResourceImports:
+
+ ```bash
+ kubectl get resourceexports -n antrea-multicluster
+ kubectl get resourceimports -n antrea-multicluster
+ ```
+
+   Note: you can follow the [Known Issue section](#known-issue) to delete any leftover ResourceExports.
+
+5. Delete the Antrea Multi-cluster Deployment:
+
+ ```bash
+ kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml
+ ```
+
+## Known Issue
+
+We recommend redeploying or updating Antrea Multi-cluster Controller with
+`kubectl apply`. If you use `kubectl delete -f *` and `kubectl create -f *`
+to redeploy the Controller in the leader cluster, you might encounter [a known issue](https://github.com/kubernetes/kubernetes/issues/60538)
+in `ResourceExport` CRD cleanup. To avoid this issue, please delete any
+`ResourceExport` CRs in the leader cluster first, and make sure
+`kubectl get resourceexport -A` returns an empty result before you redeploy the
+Multi-cluster Controller.
+
+All `ResourceExports` can be deleted with the following command:
+
+```bash
+kubectl get resourceexport -A -o json | jq -r '.items[]|[.metadata.namespace,.metadata.name]|join(" ")' | xargs -n2 bash -c 'kubectl delete -n $0 resourceexport/$1'
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/network-flow-visibility.md b/content/docs/v2.2.0-alpha.2/docs/network-flow-visibility.md
new file mode 100644
index 00000000..201abb3c
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/network-flow-visibility.md
@@ -0,0 +1,673 @@
+# Network Flow Visibility in Antrea
+
+## Table of Contents
+
+
+- [Overview](#overview)
+- [Flow Exporter](#flow-exporter)
+ - [Configuration](#configuration)
+ - [Configuration pre Antrea v1.13](#configuration-pre-antrea-v113)
+ - [IPFIX Information Elements (IEs) in a Flow Record](#ipfix-information-elements-ies-in-a-flow-record)
+ - [IEs from IANA-assigned IE Registry](#ies-from-iana-assigned-ie-registry)
+ - [IEs from Reverse IANA-assigned IE Registry](#ies-from-reverse-iana-assigned-ie-registry)
+ - [IEs from Antrea IE Registry](#ies-from-antrea-ie-registry)
+ - [Supported Capabilities](#supported-capabilities)
+ - [Types of Flows and Associated Information](#types-of-flows-and-associated-information)
+ - [Connection Metrics](#connection-metrics)
+- [Flow Aggregator](#flow-aggregator)
+ - [Deployment](#deployment)
+ - [Configuration](#configuration-1)
+ - [Configuring secure connections to the ClickHouse database](#configuring-secure-connections-to-the-clickhouse-database)
+ - [Example of flow-aggregator.conf](#example-of-flow-aggregatorconf)
+ - [IPFIX Information Elements (IEs) in an Aggregated Flow Record](#ipfix-information-elements-ies-in-an-aggregated-flow-record)
+ - [IEs from Antrea IE Registry](#ies-from-antrea-ie-registry-1)
+ - [Supported Capabilities](#supported-capabilities-1)
+ - [Storage of Flow Records](#storage-of-flow-records)
+ - [Correlation of Flow Records](#correlation-of-flow-records)
+ - [Aggregation of Flow Records](#aggregation-of-flow-records)
+ - [Antctl Support](#antctl-support)
+- [Quick Deployment](#quick-deployment)
+ - [Image-building Steps](#image-building-steps)
+ - [Deployment Steps](#deployment-steps)
+- [Flow Collectors](#flow-collectors)
+ - [Go-ipfix Collector](#go-ipfix-collector)
+ - [Deployment Steps](#deployment-steps-1)
+ - [Output Flow Records](#output-flow-records)
+ - [Grafana Flow Collector (migrated)](#grafana-flow-collector-migrated)
+ - [ELK Flow Collector (removed)](#elk-flow-collector-removed)
+- [Layer 7 Network Flow Exporter](#layer-7-network-flow-exporter)
+ - [Prerequisites](#prerequisites)
+ - [Usage](#usage)
+
+
+## Overview
+
+[Antrea](design/architecture.md) is a Kubernetes network plugin that provides network
+connectivity and security features for Pod workloads. Considering the scale and
+dynamism of Kubernetes workloads in a cluster, Network Flow Visibility helps in
+the management and configuration of Kubernetes resources such as Network Policy,
+Services, Pods etc., and thereby provides opportunities to enhance the performance
+and security aspects of Pod workloads.
+
+For visualizing the network flows, Antrea monitors the flows in the Linux conntrack
+module. These flows are converted to flow records, which are post-processed
+before they are sent to the configured external flow collector. A high-level design is given below:
+
+![Antrea Flow Visibility Design](assets/flow_visibility.svg)
+
+## Flow Exporter
+
+In Antrea, the basic building block for the Network Flow Visibility is the **Flow
+Exporter**. Flow Exporter operates within Antrea Agent; it builds and maintains
+a connection store by polling and dumping flows from conntrack module periodically.
+Connections from the connection store are exported to the [Flow Aggregator
+Service](#flow-aggregator) using the IPFIX protocol, and for this purpose we use
+the IPFIX exporter process from the [go-ipfix](https://github.com/vmware/go-ipfix)
+library.
+
+### Configuration
+
+In addition to enabling the Flow Exporter feature gate (if needed), you need to
+ensure that the `flowExporter.enable` flag is set to true in the Antrea Agent
+configuration.
+
+Your `antrea-agent` ConfigMap should look like this:
+
+```yaml
+ antrea-agent.conf: |
+ # FeatureGates is a map of feature names to bools that enable or disable experimental features.
+ featureGates:
+ # Enable flowexporter which exports polled conntrack connections as IPFIX flow records from each agent to a configured collector.
+ FlowExporter: true
+
+ flowExporter:
+ # Enable FlowExporter, a feature used to export polled conntrack connections as
+ # IPFIX flow records from each agent to a configured collector. To enable this
+ # feature, you need to set "enable" to true, and ensure that the FlowExporter
+ # feature gate is also enabled.
+ enable: true
+      # Provide the IPFIX collector address as a string with format <HOST>:[<PORT>][:<PROTO>].
+ # HOST can either be the DNS name, IP, or Service name of the Flow Collector. If
+ # using an IP, it can be either IPv4 or IPv6. However, IPv6 address should be
+ # wrapped with []. When the collector is running in-cluster as a Service, set
+      # <HOST> to <NAMESPACE>/<SERVICE_NAME>. For example,
+ # "flow-aggregator/flow-aggregator" can be provided to connect to the Antrea
+ # Flow Aggregator Service.
+ # If PORT is empty, we default to 4739, the standard IPFIX port.
+ # If no PROTO is given, we consider "tls" as default. We support "tls", "tcp" and
+ # "udp" protocols. "tls" is used for securing communication between flow exporter and
+ # flow aggregator.
+ flowCollectorAddr: "flow-aggregator/flow-aggregator:4739:tls"
+
+ # Provide flow poll interval as a duration string. This determines how often the
+ # flow exporter dumps connections from the conntrack module. Flow poll interval
+ # should be greater than or equal to 1s (one second).
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ flowPollInterval: "5s"
+
+ # Provide the active flow export timeout, which is the timeout after which a flow
+ # record is sent to the collector for active flows. Thus, for flows with a continuous
+ # stream of packets, a flow record will be exported to the collector once the elapsed
+ # time since the last export event is equal to the value of this timeout.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ activeFlowExportTimeout: "5s"
+
+ # Provide the idle flow export timeout, which is the timeout after which a flow
+ # record is sent to the collector for idle flows. A flow is considered idle if no
+ # packet matching this flow has been observed since the last export event.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ idleFlowExportTimeout: "15s"
+```
+
+Please note that the default value for `flowExporter.flowCollectorAddr` is
+`"flow-aggregator/flow-aggregator:4739:tls"`, which enables the Flow Exporter to connect
+to the Flow Aggregator Service, assuming it is running in the same K8s cluster with the Name
+and Namespace set to `flow-aggregator`. If you deploy the Flow Aggregator Service with
+a different Name and Namespace, then set `flowExporter.flowCollectorAddr` appropriately.
+
+Please note that the default values for
+`flowExporter.flowPollInterval`, `flowExporter.activeFlowExportTimeout`, and
+`flowExporter.idleFlowExportTimeout` are 5s, 5s, and 15s, respectively.
+TLS communication between the Flow Exporter and the Flow Aggregator is enabled by default.
+Please modify these settings as per your requirements.
+
+#### Configuration pre Antrea v1.13
+
+Prior to the Antrea v1.13 release, the `flowExporter` option group in the
+Antrea Agent configuration did not exist. To enable the Flow Exporter feature,
+one simply needed to enable the feature gate, and the Flow Exporter related
+configuration could be configured using the (now deprecated) `flowCollectorAddr`,
+`flowPollInterval`, `activeFlowExportTimeout`, `idleFlowExportTimeout`
+parameters.
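+
+A minimal sketch of the legacy configuration, using the deprecated top-level
+parameters named above (values mirror the current defaults):
+
+```yaml
+  antrea-agent.conf: |
+    featureGates:
+      FlowExporter: true
+    # Deprecated top-level parameters, replaced by the flowExporter option
+    # group since Antrea v1.13:
+    flowCollectorAddr: "flow-aggregator/flow-aggregator:4739:tls"
+    flowPollInterval: "5s"
+    activeFlowExportTimeout: "5s"
+    idleFlowExportTimeout: "15s"
+```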
+
+### IPFIX Information Elements (IEs) in a Flow Record
+
+There are 34 IPFIX IEs in each exported flow record, which are defined in the
+IANA-assigned IE registry, the Reverse IANA-assigned IE registry and the Antrea
+IE registry. The reverse IEs are used to provide bi-directional information about
+the flow. The Enterprise ID is 0 for IANA-assigned IE registry, 29305 for reverse
+IANA IE registry, 56505 for Antrea IE registry. All the IEs used by the Antrea
+Flow Exporter are listed below:
+
+#### IEs from IANA-assigned IE Registry
+
+| IPFIX Information Element| Field ID | Type |
+|--------------------------|----------|----------------|
+| flowStartSeconds | 150 | dateTimeSeconds|
+| flowEndSeconds | 151 | dateTimeSeconds|
+| flowEndReason | 136 | unsigned8 |
+| sourceIPv4Address | 8 | ipv4Address |
+| destinationIPv4Address | 12 | ipv4Address |
+| sourceIPv6Address | 27 | ipv6Address |
+| destinationIPv6Address | 28 | ipv6Address |
+| sourceTransportPort | 7 | unsigned16 |
+| destinationTransportPort | 11 | unsigned16 |
+| protocolIdentifier | 4 | unsigned8 |
+| packetTotalCount | 86 | unsigned64 |
+| octetTotalCount | 85 | unsigned64 |
+| packetDeltaCount | 2 | unsigned64 |
+| octetDeltaCount | 1 | unsigned64 |
+
+#### IEs from Reverse IANA-assigned IE Registry
+
+| IPFIX Information Element| Field ID | Type |
+|--------------------------|----------|----------------|
+| reversePacketTotalCount | 86 | unsigned64 |
+| reverseOctetTotalCount | 85 | unsigned64 |
+| reversePacketDeltaCount | 2 | unsigned64 |
+| reverseOctetDeltaCount | 1 | unsigned64 |
+
+#### IEs from Antrea IE Registry
+
+| IPFIX Information Element | Field ID | Type | Description |
+|----------------------------------|----------|-------------|-------------|
+| sourcePodNamespace | 100 | string | |
+| sourcePodName | 101 | string | |
+| destinationPodNamespace | 102 | string | |
+| destinationPodName | 103 | string | |
+| sourceNodeName | 104 | string | |
+| destinationNodeName | 105 | string | |
+| destinationClusterIPv4 | 106 | ipv4Address | |
+| destinationClusterIPv6 | 107 | ipv6Address | |
+| destinationServicePort | 108 | unsigned16 | |
+| destinationServicePortName | 109 | string | |
+| ingressNetworkPolicyName | 110 | string | Name of the ingress network policy applied to the destination Pod for this flow. |
+| ingressNetworkPolicyNamespace | 111 | string | Namespace of the ingress network policy applied to the destination Pod for this flow. |
+| ingressNetworkPolicyType | 115 | unsigned8 | 1 stands for Kubernetes Network Policy. 2 stands for Antrea Network Policy. 3 stands for Antrea Cluster Network Policy. |
+| ingressNetworkPolicyRuleName | 141 | string | Name of the ingress network policy Rule applied to the destination Pod for this flow. |
+| egressNetworkPolicyName | 112 | string | Name of the egress network policy applied to the source Pod for this flow. |
+| egressNetworkPolicyNamespace | 113 | string | Namespace of the egress network policy applied to the source Pod for this flow. |
+| egressNetworkPolicyType | 118 | unsigned8 | |
+| egressNetworkPolicyRuleName | 142 | string | Name of the egress network policy rule applied to the source Pod for this flow. |
+| ingressNetworkPolicyRuleAction | 139 | unsigned8 | 1 stands for Allow. 2 stands for Drop. 3 stands for Reject. |
+| egressNetworkPolicyRuleAction | 140 | unsigned8 | |
+| tcpState | 136 | string | The state of the TCP connection. The states are: LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT, and CLOSED. |
+| flowType | 137 | unsigned8 | 1 stands for Intra-Node. 2 stands for Inter-Node. 3 stands for To External. 4 stands for From External. |
+
+### Supported Capabilities
+
+#### Types of Flows and Associated Information
+
+Currently, the Flow Exporter feature provides visibility for Pod-to-Pod, Pod-to-Service
+and Pod-to-External network flows along with the associated statistics such as data
+throughput (bits per second), packet throughput (packets per second), cumulative byte
+count and cumulative packet count. Pod-To-Service flow visibility is supported
+only when [Antrea Proxy is enabled](feature-gates.md), which is the case by default
+starting with Antrea v0.11. In the future, we will enable support for External-To-Service
+flows.
+
+Kubernetes information such as Node name, Pod name, Pod Namespace, Service name,
+NetworkPolicy name and NetworkPolicy Namespace, is added to the flow records.
+Network Policy Rule Action (Allow, Reject, Drop) is also supported for both
+Antrea-native NetworkPolicies and K8s NetworkPolicies. For K8s NetworkPolicies,
+connections dropped due to [isolated Pod behavior](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)
+will be assigned the Drop action.
+For flow records that are exported from any given Antrea Agent, the Flow Exporter
+only provides the information of Kubernetes entities that are local to the Antrea
+Agent. In other words, flow records are only complete for intra-Node flows, but
+incomplete for inter-Node flows. It is the responsibility of the [Flow Aggregator](#flow-aggregator)
+to correlate flows from the source and destination Nodes and produce complete flow
+records.
+
+Both Flow Exporter and Flow Aggregator are supported in IPv4 clusters, IPv6 clusters and dual-stack clusters.
+
+#### Connection Metrics
+
+We support the following connection metrics as Prometheus metrics, exposed
+through the [Antrea Agent apiserver endpoint](prometheus-integration.md):
+`antrea_agent_conntrack_total_connection_count`,
+`antrea_agent_conntrack_antrea_connection_count`,
+`antrea_agent_denied_connection_count`,
+`antrea_agent_conntrack_max_connection_count`, and
+`antrea_agent_flow_collector_reconnection_count`.
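+
+For example, the Agent metrics endpoint can be scraped manually for a quick
+check. This is only an illustrative sketch: the Pod name is a placeholder, the
+port assumes the default Agent `apiPort` of 10350, and depending on your
+configuration you may need to pass a valid bearer token for authentication.
+
+```bash
+# Forward the Agent apiserver port to the local machine.
+kubectl port-forward -n kube-system <antrea-agent-pod> 10350:10350 &
+# Scrape the metrics endpoint and filter for the conntrack metrics.
+curl -sk https://127.0.0.1:10350/metrics | grep antrea_agent_conntrack
+```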
+
+## Flow Aggregator
+
+Flow Aggregator is deployed as a Kubernetes Service. The main functionality of Flow
+Aggregator is to store, correlate and aggregate the flow records received from the
+Flow Exporter of Antrea Agents. More details on the functionality are provided in
+the [Supported Capabilities](#supported-capabilities-1) section.
+
+Flow Aggregator is implemented as an IPFIX mediator, which
+consists of an IPFIX Collector Process, an IPFIX Intermediate Process and an
+IPFIX Exporter Process. We use the [go-ipfix](https://github.com/vmware/go-ipfix)
+library to implement the Flow Aggregator.
+
+### Deployment
+
+To deploy a released version of Flow Aggregator Service, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases). For any
+given release `<TAG>` (e.g. `v0.12.0`), you can deploy Flow Aggregator as follows:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/flow-aggregator.yml
+```
+
+To deploy the latest version of Flow Aggregator Service (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/flow-aggregator.yml):
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/flow-aggregator.yml
+```
+
+### Configuration
+
+The following configuration parameters have to be provided through the Flow
+Aggregator ConfigMap. The Flow Aggregator needs to be configured with at least
+one of the supported [Flow Collectors](#flow-collectors).
+`flowCollector` is mandatory for the [go-ipfix collector](#deployment-steps), and
+`clickHouse` is mandatory for the [Grafana Flow Collector](#grafana-flow-collector-migrated).
+We provide example values for these parameters in the following snippet.
+
+* If you have deployed the [go-ipfix collector](#deployment-steps),
+then please set `flowCollector.enable` to `true` and use the following format
+for `flowCollector.address`: `<IP>:<port>[:<proto>]`
+* If you have deployed the [Grafana Flow Collector](#grafana-flow-collector-migrated),
+then please enable the collector by setting `clickHouse.enable` to `true`. If
+it is deployed following the [deployment steps](#deployment-steps-1), the
+ClickHouse server is already exposed via a K8s Service, and no further
+configuration is required. If a different FQDN or IP is desired, please use
+the URL for `clickHouse.databaseURL` in the following format:
+`<protocol>://<FQDN or IP>:<port>`.
+
+#### Configuring secure connections to the ClickHouse database
+
+Starting with Antrea v1.13, you can enable TLS when connecting to the ClickHouse
+server by setting `clickHouse.databaseURL` with protocol `tls` or `https`.
+You can also change the value of `clickHouse.tls.insecureSkipVerify` to
+determine whether to skip the verification of the server's certificate.
+If you want to provide a custom CA certificate, you can set
+`clickHouse.tls.caCert` to `true` and the Flow Aggregator will read the CA
+certificate from the `clickhouse-ca` Secret.
+
+Make sure to use the following form when creating the `clickhouse-ca` Secret
+with the custom CA certificate:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: clickhouse-ca
+ namespace: flow-aggregator
+data:
+  ca.crt: <BASE64 ENCODED CA CERTIFICATE>
+```
+
+You can use `kubectl apply -f <secret.yaml>` to create the above Secret, or
+use `kubectl create secret`:
+
+```bash
+kubectl create secret generic clickhouse-ca -n flow-aggregator --from-file=ca.crt=<PATH TO CA CERTIFICATE>
+```
+
+Prior to Antrea v1.13, secure connections to ClickHouse were not supported,
+and TCP was the only supported protocol when connecting to the ClickHouse
+server from the Flow Aggregator.
+
+#### Example of flow-aggregator.conf
+
+```yaml
+flow-aggregator.conf: |
+ # Provide the active flow record timeout as a duration string. This determines
+ # how often the flow aggregator exports the active flow records to the flow
+ # collector. Thus, for flows with a continuous stream of packets, a flow record
+ # will be exported to the collector once the elapsed time since the last export
+ # event in the flow aggregator is equal to the value of this timeout.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ activeFlowRecordTimeout: 60s
+
+ # Provide the inactive flow record timeout as a duration string. This determines
+ # how often the flow aggregator exports the inactive flow records to the flow
+ # collector. A flow record is considered to be inactive if no matching record
+ # has been received by the flow aggregator in the specified interval.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ inactiveFlowRecordTimeout: 90s
+
+ # Provide the transport protocol for the flow aggregator collecting process, which is tls, tcp or udp.
+ aggregatorTransportProtocol: "tls"
+
+ # Provide an extra DNS name or IP address of flow aggregator for generating TLS certificate.
+ flowAggregatorAddress: ""
+
+ # recordContents enables configuring some fields in the flow records. Fields can
+ # be excluded to reduce record size, but some features or external tooling may
+ # depend on these fields.
+ recordContents:
+ # Determine whether source and destination Pod labels will be included in the flow records.
+ podLabels: false
+
+ # apiServer contains APIServer related configuration options.
+ apiServer:
+ # The port for the flow-aggregator APIServer to serve on.
+ apiPort: 10348
+
+ # Comma-separated list of Cipher Suites. If omitted, the default Go Cipher Suites will be used.
+ # https://golang.org/pkg/crypto/tls/#pkg-constants
+ # Note that TLS1.3 Cipher Suites cannot be added to the list. But the apiserver will always
+ # prefer TLS1.3 Cipher Suites whenever possible.
+ tlsCipherSuites: ""
+
+ # TLS min version from: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
+ tlsMinVersion: ""
+
+ # flowCollector contains external IPFIX or JSON collector related configuration options.
+ flowCollector:
+ # Enable is the switch to enable exporting flow records to external flow collector.
+ enable: false
+
+    # Provide the flow collector address as string with format <IP>:<port>[:<proto>], where proto is tcp or udp.
+ # If no L4 transport proto is given, we consider tcp as default.
+ address: ""
+
+ # Provide the 32-bit Observation Domain ID which will uniquely identify this instance of the flow
+ # aggregator to an external flow collector. If omitted, an Observation Domain ID will be generated
+ # from the persistent cluster UUID generated by Antrea. Failing that (e.g. because the cluster UUID
+ # is not available), a value will be randomly generated, which may vary across restarts of the flow
+ # aggregator.
+ #observationDomainID:
+
+ # Provide format for records sent to the configured flow collector.
+ # Supported formats are IPFIX and JSON.
+ recordFormat: "IPFIX"
+
+ # clickHouse contains ClickHouse related configuration options.
+ clickHouse:
+ # Enable is the switch to enable exporting flow records to ClickHouse.
+ enable: false
+
+ # Database is the name of database where Antrea "flows" table is created.
+ database: "default"
+
+    # DatabaseURL is the URL to the database. Provide the database URL as a string with format
+    # <protocol>://<FQDN or IP>:<port>. The protocol has to be
+ # one of the following: "tcp", "tls", "http", "https". When "tls" or "https" is used, tls
+ # will be enabled.
+ databaseURL: "tcp://clickhouse-clickhouse.flow-visibility.svc:9000"
+
+ # TLS configuration options, when using TLS to connect to the ClickHouse service.
+ tls:
+ # InsecureSkipVerify determines whether to skip the verification of the server's certificate chain and host name.
+ # Default is false.
+ insecureSkipVerify: false
+
+ # CACert indicates whether to use custom CA certificate. Default root CAs will be used if this field is false.
+ # If true, a Secret named "clickhouse-ca" must be provided with the following keys:
+      # ca.crt: <CA certificate>
+ caCert: false
+
+ # Debug enables debug logs from ClickHouse sql driver.
+ debug: false
+
+ # Compress enables lz4 compression when committing flow records.
+ compress: true
+
+ # CommitInterval is the periodical interval between batch commit of flow records to DB.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ # The minimum interval is 1s based on ClickHouse documentation for best performance.
+ commitInterval: "8s"
+```
+
+Please note that the default values for the `activeFlowRecordTimeout`,
+`inactiveFlowRecordTimeout` and `aggregatorTransportProtocol` parameters are
+`60s`, `90s` and `tls` respectively. Please make sure that
+`aggregatorTransportProtocol` and the protocol of `flowCollectorAddr` in
+`antrea-agent.conf` are set to `tls` to guarantee that secure communication
+works properly. The protocol of `flowCollectorAddr` and
+`aggregatorTransportProtocol` must always match, so TLS must either be enabled
+for both sides or disabled for both sides. Please modify the parameters as per
+your requirements.
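+
+For reference, the matching Flow Exporter settings on the Agent side might look
+like the snippet below. This is only a sketch: the address shown is the default
+Flow Aggregator Service address, and your values may differ.
+
+```yaml
+  antrea-agent.conf: |
+    flowExporter:
+      enable: true
+      # The protocol suffix (tls here) must match aggregatorTransportProtocol.
+      flowCollectorAddr: "flow-aggregator/flow-aggregator:4739:tls"
+```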
+
+Please note that the default value for `recordContents.podLabels` is `false`,
+which indicates source and destination Pod labels will not be included in the
+flow records exported to `flowCollector` and `clickHouse`. If you would like
+to include them, you can modify the value to `true`.
+
+Please note that the default value for `apiServer.apiPort` is `10348`, which
+is the port used to expose the Flow Aggregator's APIServer. Please modify the
+parameters as per your requirements.
+
+Please note that the default value for `clickHouse.commitInterval` is `8s`,
+which is based on experiment results to achieve the best ClickHouse write
+performance and data retention. Based on the ClickHouse recommendation for best
+performance, this interval is required to be no shorter than `1s`. Also note
+that the Flow Aggregator has a cache limit of ~500k records for the
+ClickHouse-Grafana collector. If `clickHouse.commitInterval` is set to too
+large a value, there is a risk of losing records.
+
+### IPFIX Information Elements (IEs) in an Aggregated Flow Record
+
+In addition to IPFIX information elements provided in the [above section](#ipfix-information-elements-ies-in-a-flow-record),
+the Flow Aggregator adds the following fields to the flow records.
+
+#### IEs from Antrea IE Registry
+
+| IPFIX Information Element | Field ID | Type | Description |
+|-------------------------------------------|----------|-------------|-------------|
+| packetTotalCountFromSourceNode | 120 | unsigned64 | The cumulative number of packets for this flow as reported by the source Node, since the flow started. |
+| octetTotalCountFromSourceNode | 121 | unsigned64 | The cumulative number of octets for this flow as reported by the source Node, since the flow started. |
+| packetDeltaCountFromSourceNode | 122 | unsigned64 | The number of packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| octetDeltaCountFromSourceNode | 123 | unsigned64 | The number of octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| reversePacketTotalCountFromSourceNode | 124 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the source Node, since the flow started. |
+| reverseOctetTotalCountFromSourceNode | 125 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the source Node, since the flow started. |
+| reversePacketDeltaCountFromSourceNode | 126 | unsigned64 | The number of reverse packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| reverseOctetDeltaCountFromSourceNode | 127 | unsigned64 | The number of reverse octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| packetTotalCountFromDestinationNode | 128 | unsigned64 | The cumulative number of packets for this flow as reported by the destination Node, since the flow started. |
+| octetTotalCountFromDestinationNode | 129 | unsigned64 | The cumulative number of octets for this flow as reported by the destination Node, since the flow started. |
+| packetDeltaCountFromDestinationNode | 130 | unsigned64 | The number of packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| octetDeltaCountFromDestinationNode | 131 | unsigned64 | The number of octets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| reversePacketTotalCountFromDestinationNode| 132 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the destination Node, since the flow started. |
+| reverseOctetTotalCountFromDestinationNode | 133 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the destination Node, since the flow started. |
+| reversePacketDeltaCountFromDestinationNode| 134 | unsigned64 | The number of reverse packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| reverseOctetDeltaCountFromDestinationNode | 135 | unsigned64 | The number of reverse octets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| sourcePodLabels | 143 | string | |
+| destinationPodLabels | 144 | string | |
+| throughput | 145 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point. The unit is bits per second. |
+| reverseThroughput | 146 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point. The unit is bits per second. |
+| throughputFromSourceNode | 147 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. |
+| throughputFromDestinationNode | 148 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. |
+| reverseThroughputFromSourceNode | 149 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. |
+| reverseThroughputFromDestinationNode | 150 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. |
+| flowEndSecondsFromSourceNode | 151 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the source Node. The unit is seconds. |
+| flowEndSecondsFromDestinationNode | 152 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the destination Node. The unit is seconds. |
+
+### Supported Capabilities
+
+#### Storage of Flow Records
+
+Flow Aggregator stores the flow records received from Antrea Agents in a hash map,
+where the flow key is the 5-tuple of a network connection. The 5-tuple consists of
+Source IP, Destination IP, Source Port, Destination Port and Transport protocol.
+Therefore, Flow Aggregator maintains one flow record for any given connection,
+and this flow record gets updated until the connection in the Kubernetes cluster
+becomes invalid.
+
+#### Correlation of Flow Records
+
+In the case of inter-Node flows, there are two flow records, one
+from the source Node, where the flow originates from, and another one from the destination
+Node, where the destination Pod resides. Both the flow records contain incomplete
+information as mentioned [here](#types-of-flows-and-associated-information). Flow
+Aggregator provides support for the correlation of the flow records from the
+source Node and the destination Node, and it exports a single flow record with complete
+information for both inter-Node and intra-Node flows.
+
+#### Aggregation of Flow Records
+
+Flow Aggregator aggregates the flow records that belong to a single connection.
+As part of aggregation, fields such as flow timestamps, flow statistics etc. are
+updated. For the purpose of updating flow statistics fields, Flow Aggregator introduces
+the [new fields](#ies-from-antrea-ie-registry) in Antrea Enterprise IPFIX registry
+corresponding to the Source Node and Destination Node, so that flow statistics from
+different Nodes can be preserved.
+
+### Antctl Support
+
+antctl can access the Flow Aggregator API to dump flow records and print metrics
+about flow record processing. Refer to the
+[antctl documentation](antctl.md#flow-aggregator-commands) for more information.
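+
+For example, from within the Flow Aggregator Pod you might run commands like the
+following; the exact set of supported commands for your version is listed in the
+antctl documentation:
+
+```bash
+# Dump the flow records currently stored by the Flow Aggregator.
+antctl get flowrecords
+# Print metrics about flow record processing.
+antctl get recordmetrics
+```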
+
+## Quick Deployment
+
+If you would like to quickly try the Network Flow Visibility feature, you can
+deploy Antrea, the Flow Aggregator Service, and the Grafana Flow Collector on
+the [Vagrant setup](../test/e2e/README.md).
+
+### Image-building Steps
+
+Build the required images under the Antrea repository using make:
+
+```shell
+make
+make flow-aggregator-image
+```
+
+### Deployment Steps
+
+Given any external IPFIX flow collector, you can deploy Antrea and the Flow
+Aggregator Service on a default Vagrant setup by running the following commands:
+
+```shell
+./infra/vagrant/provision.sh
+./infra/vagrant/push_antrea.sh --flow-collector <externalFlowCollectorAddr>
+```
+
+If you would like to deploy the Grafana Flow Collector, you can run the following command:
+
+```shell
+./infra/vagrant/provision.sh
+./infra/vagrant/push_antrea.sh --flow-collector Grafana
+```
+
+## Flow Collectors
+
+Here we list two choices for the externally configured flow collector: the
+go-ipfix collector and the Grafana Flow Collector. For each collector, we
+introduce how to deploy it and how to output or visualize the collected flow
+record information.
+
+### Go-ipfix Collector
+
+#### Deployment Steps
+
+The go-ipfix collector can be built from [go-ipfix library](https://github.com/vmware/go-ipfix).
+It is used to collect, decode and log the IPFIX records.
+
+* To deploy a released version of the go-ipfix collector, please choose one
+deployment manifest from the [list of releases](https://github.com/vmware/go-ipfix/releases)
+(supported after v0.5.2). For any given release `<TAG>` (e.g. `v0.5.2`), you
+can deploy the collector as follows:
+
+```shell
+kubectl apply -f https://github.com/vmware/go-ipfix/releases/download/<TAG>/ipfix-collector.yaml
+```
+
+* To deploy the latest version of the go-ipfix collector (built from the main branch),
+use the checked-in [deployment manifest](https://github.com/vmware/go-ipfix/blob/main/build/yamls/ipfix-collector.yaml):
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/vmware/go-ipfix/main/build/yamls/ipfix-collector.yaml
+```
+
+The go-ipfix collector also supports customization of its port and protocol
+parameters. Please follow the [go-ipfix documentation](https://github.com/vmware/go-ipfix#readme)
+to configure those parameters if needed.
+
+#### Output Flow Records
+
+To output the flow records collected by the go-ipfix collector, use the command below:
+
+```shell
+kubectl logs <ipfix-collector-pod-name> -n ipfix
+```
+
+### Grafana Flow Collector (migrated)
+
+**Starting with Antrea v1.8, support for the Grafana Flow Collector has been migrated to Theia.**
+
+The Grafana Flow Collector was added in Antrea v1.6.0. In Antrea v1.7.0, we
+started to move the network observability and analytics functionalities of Antrea
+to [Project Theia](https://github.com/antrea-io/theia), including the Grafana
+Flow Collector. Going forward, further development of the Grafana Flow Collector
+will be in the Theia repo. For the up-to-date version of Grafana Flow Collector
+and other Theia features, please refer to the
+[Theia document](https://github.com/antrea-io/theia/blob/main/docs/network-flow-visibility.md).
+
+### ELK Flow Collector (removed)
+
+**Starting with Antrea v1.7, support for the ELK Flow Collector has been removed.**
+Please consider using the [Grafana Flow Collector](#grafana-flow-collector-migrated)
+instead, which is actively maintained.
+
+## Layer 7 Network Flow Exporter
+
+In addition to layer 4 network visibility, Antrea adds layer 7 network flow
+export.
+
+### Prerequisites
+
+To achieve L7 (Layer 7) network flow export, the `L7FlowExporter` feature gate
+must be enabled.
+
+Note: L7 flow-visibility support for Theia is not yet implemented.
+
+### Usage
+
+To export layer 7 flows of a Pod or a Namespace, user can annotate Pods or
+Namespaces with the annotation key `visibility.antrea.io/l7-export` and set the
+value to indicate the traffic flow direction, which can be `ingress`, `egress`
+or `both`.
+
+For example, to enable L7 flow export in the ingress direction on
+Pod test-pod in the default Namespace, you can use:
+
+```bash
+kubectl annotate pod test-pod visibility.antrea.io/l7-export=ingress
+```
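+
+Similarly, to enable L7 flow export in both directions for all Pods in the
+default Namespace, you can annotate the Namespace instead:
+
+```bash
+kubectl annotate namespace default visibility.antrea.io/l7-export=both
+```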
+
+Based on the annotation, the Flow Exporter will export the L7 flow data to the
+Flow Aggregator or the configured IPFIX collector using the fields `appProtocolName`
+and `httpVals`.
+
+* The `appProtocolName` field is used to indicate the application layer protocol
+name (e.g. http); it will be empty if application layer data is not exported.
+* `httpVals` stores a serialized JSON dictionary with every HTTP request for
+a connection mapped to a unique transaction ID. This format lets us group all
+the HTTP transactions pertaining to the same connection into the same exported
+record.
+
+An example of `httpVals` is:
+
+`"{\"0\":{\"hostname\":\"10.10.0.1\",\"url\":\"/public/\",\"http_user_agent\":\"curl/7.74.0\",\"http_content_type\":\"text/html\",\"http_method\":\"GET\",\"protocol\":\"HTTP/1.1\",\"status\":200,\"length\":153}}"`
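+
+Decoded, this example corresponds to the following JSON structure, where `"0"`
+is the transaction ID:
+
+```json
+{
+  "0": {
+    "hostname": "10.10.0.1",
+    "url": "/public/",
+    "http_user_agent": "curl/7.74.0",
+    "http_content_type": "text/html",
+    "http_method": "GET",
+    "protocol": "HTTP/1.1",
+    "status": 200,
+    "length": 153
+  }
+}
+```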
+
+HTTP fields in the `httpVals` are:
+
+| HTTP field        | Description                                             |
+|-------------------|--------------------------------------------------------|
+| hostname | IP address of the sender |
+| URL               | URL requested on the server                             |
+| http_user_agent | application used for HTTP |
+| http_content_type | type of content being returned by the server |
+| http_method | HTTP method used for the request |
+| protocol | HTTP protocol version used for the request or response |
+| status | HTTP status code |
+| length | size of the response body |
+
+As of now, the only supported layer 7 protocol is `HTTP1.1`. Support for more
+protocols may be added in the future. Antrea supports the L7FlowExporter feature
+only on Linux Nodes.
diff --git a/content/docs/v2.2.0-alpha.2/docs/network-requirements.md b/content/docs/v2.2.0-alpha.2/docs/network-requirements.md
new file mode 100644
index 00000000..076b616a
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/network-requirements.md
@@ -0,0 +1,23 @@
+# Network Requirements
+
+Antrea has a few network requirements to get started; ensure that your hosts and
+firewalls allow the necessary traffic based on your configuration.
+
+| Configuration | Host(s) | Protocols/Ports | Other |
+|------------------------------------------------|---------------------------------------|--------------------------------------------|------------------------------|
+| Antrea with VXLAN enabled | All | UDP 4789 | |
+| Antrea with Geneve enabled | All | UDP 6081 | |
+| Antrea with STT enabled | All | TCP 7471 | |
+| Antrea with GRE enabled | All | IP Protocol ID 47 | No support for IPv6 clusters |
+| Antrea with IPsec ESP enabled | All | IP protocol ID 50 and 51, UDP 500 and 4500 | |
+| Antrea with WireGuard enabled | All | UDP 51820 | |
+| Antrea Multi-cluster with WireGuard encryption | Multi-cluster Gateway Node | UDP 51821 | |
+| Antrea with feature BGPPolicy enabled | Selected by user-provided BGPPolicies | TCP 179[1] | |
+| All | Kube-apiserver host | TCP 443 or 6443[2] | |
+| All | All | TCP 10349, 10350, 10351, UDP 10351 | |
+
+[1] _The default value is 179, but a user-created BGPPolicy can assign a different
+port number._
+
+[2] _The value is passed to the kube-apiserver `--secure-port` flag. You can find
+the port number from the output of `kubectl get svc kubernetes -o yaml`._
diff --git a/content/docs/v2.2.0-alpha.2/docs/node-port-local.md b/content/docs/v2.2.0-alpha.2/docs/node-port-local.md
new file mode 100644
index 00000000..1c855702
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/node-port-local.md
@@ -0,0 +1,224 @@
+# NodePortLocal (NPL)
+
+## Table of Contents
+
+
+- [What is NodePortLocal?](#what-is-nodeportlocal)
+- [Prerequisites](#prerequisites)
+- [Usage](#usage)
+ - [Usage pre Antrea v1.7](#usage-pre-antrea-v17)
+ - [Usage pre Antrea v1.4](#usage-pre-antrea-v14)
+ - [Usage pre Antrea v1.2](#usage-pre-antrea-v12)
+- [Limitations](#limitations)
+- [Integrations with External Load Balancers](#integrations-with-external-load-balancers)
+ - [AVI](#avi)
+
+
+## What is NodePortLocal?
+
+`NodePortLocal` (NPL) is a feature that runs as part of the Antrea Agent,
+through which each port of a Service backend Pod can be reached from the
+external network using a port of the Node on which the Pod is running. NPL
+enables better integration with external Load Balancers which can take advantage
+of the feature: instead of relying on NodePort Services implemented by
+kube-proxy, external Load-Balancers can consume NPL port mappings published by
+the Antrea Agent (as K8s Pod annotations) and load-balance Service traffic
+directly to backend Pods.
+
+## Prerequisites
+
+NodePortLocal was introduced in v0.13 as an alpha feature, and was graduated to
+beta in v1.4, at which time it was enabled by default. Prior to v1.4, a feature
+gate, `NodePortLocal`, must be enabled on the antrea-agent for the feature to
+work. Starting from Antrea v1.7, NPL is supported on the Windows antrea-agent.
+From Antrea v1.14, NPL is GA.
+
+## Usage
+
+In addition to enabling the NodePortLocal feature gate (if needed), you need to
+ensure that the `nodePortLocal.enable` flag is set to true in the Antrea Agent
+configuration. The `nodePortLocal.portRange` parameter can also be set to change
+the range from which Node ports will be allocated. Otherwise, the range
+of `61000-62000` will be used by default on Linux, and the range `40000-41000` will
+be used on Windows. When using the NodePortLocal feature, your `antrea-agent` ConfigMap
+should look like this:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ # True by default starting with Antrea v1.4
+ # NodePortLocal: true
+ nodePortLocal:
+ enable: true
+ # Uncomment if you need to change the port range.
+ # portRange: 61000-62000
+```
+
+Pods can be selected for `NodePortLocal` by tagging a Service with the annotation
+`nodeportlocal.antrea.io/enabled: "true"`. Consequently, `NodePortLocal` is
+enabled for all the Pods which are selected by the Service through a selector,
+and the ports of these Pods will be reachable through Node ports allocated from
+the port range. The selected Pods will be annotated with the details about
+allocated Node port(s) for the Pod.
+
+For example, given the following Service and Deployment definitions:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx
+ annotations:
+ nodeportlocal.antrea.io/enabled: "true"
+spec:
+ ports:
+ - name: web
+ port: 80
+ protocol: TCP
+ targetPort: 8080
+ selector:
+ app: nginx
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx
+```
+
+If the NodePortLocal feature gate is enabled, then all the Pods in the
+Deployment will be annotated with the `nodeportlocal.antrea.io` annotation. The
+value of this annotation is a serialized JSON array. In our example, a given Pod
+in the `nginx` Deployment may look like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx-6799fc88d8-9rx8z
+ labels:
+ app: nginx
+ annotations:
+ nodeportlocal.antrea.io: '[{"podPort":8080,"nodeIP":"10.10.10.10","nodePort":61002,"protocol":"tcp"}]'
+```
+
+This annotation indicates that port 8080 of the Pod can be reached through port
+61002 of the Node with IP Address 10.10.10.10 for TCP traffic.
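+
+For example, assuming the Node IP is reachable from your client, the Pod could
+then be reached with:
+
+```bash
+# Connects to port 8080 of the Pod through the allocated Node port.
+curl http://10.10.10.10:61002/
+```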
+
+The `nodeportlocal.antrea.io` annotation is generated and managed by Antrea. It
+is not meant to be created or modified by users directly. A user-provided
+annotation is likely to be overwritten by Antrea, or may lead to unexpected
+behavior.
+
+NodePortLocal can only be used with Services of type `ClusterIP` or
+`LoadBalancer`. The `nodeportlocal.antrea.io` annotation has no effect for
+Services of type `NodePort` or `ExternalName`. The annotation also has no effect
+for Services with an empty or missing Selector.
+
+Starting from Antrea v2.0, the `protocols` field is removed.
+
+### Usage pre Antrea v1.7
+
+Prior to the Antrea v1.7 minor release, the `nodeportlocal.antrea.io` annotation
+could contain multiple members in `protocols`.
+An example may look like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx-6799fc88d8-9rx8z
+ labels:
+ app: nginx
+ annotations:
+    nodeportlocal.antrea.io: '[{"podPort":8080,"nodeIP":"10.10.10.10","nodePort":61002,"protocols":["tcp","udp"]}]'
+```
+
+This annotation indicates that port 8080 of the Pod can be reached through port
+61002 of the Node with IP Address 10.10.10.10 for both TCP and UDP traffic.
+
+Prior to v1.7, the implementation would always allocate the same nodePort value
+for all the protocols exposed for a given podPort.
+Starting with v1.7, there will be multiple annotations for the different protocols
+for a given podPort, and the allocated nodePort may be different for each one.
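+
+For instance, a Pod exposing the same podPort over both TCP and UDP could carry
+two entries, each with its own allocated nodePort (illustrative values):
+
+```yaml
+  annotations:
+    nodeportlocal.antrea.io: '[{"podPort":8080,"nodeIP":"10.10.10.10","nodePort":61002,"protocol":"tcp"},{"podPort":8080,"nodeIP":"10.10.10.10","nodePort":61003,"protocol":"udp"}]'
+```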
+
+### Usage pre Antrea v1.4
+
+Prior to the Antrea v1.4 minor release, the `nodePortLocal` option group in the
+Antrea Agent configuration did not exist. To enable the NodePortLocal feature,
+one simply needed to enable the feature gate, and the port range could be
+configured using the (now removed) `nplPortRange` parameter.
+
+### Usage pre Antrea v1.2
+
+Prior to the Antrea v1.2 minor release, the NodePortLocal feature suffered from
+a known [issue](https://github.com/antrea-io/antrea/issues/1912). In order to
+use the feature, the correct list of ports exposed by each container had to be
+provided in the Pod specification (`.spec.containers[*].Ports`). The
+NodePortLocal implementation would then use this information to decide which
+ports to map for each Pod. In the above example, the Deployment definition would
+need to be changed to:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx
+ ports:
+ - containerPort: 80
+```
+
+This was error-prone because providing this list of ports is typically optional
+in K8s and omitting it does not prevent ports from being exposed, which means
+that many users may omit this information and expect NPL to work. Starting with
+Antrea v1.2, we instead rely on the `service.spec.ports[*].targetPort`
+information, for each NPL-enabled Service, to determine which ports need to be
+mapped.
+
+## Limitations
+
+This feature is currently only supported for Nodes running Linux or Windows
+with IPv4 addresses. Only TCP & UDP Service ports are supported (not SCTP).
+
+## Integrations with External Load Balancers
+
+### AVI
+
+When using AVI and the AVI Kubernetes Operator (AKO), the AKO `serviceType`
+configuration parameter can be set to `NodePortLocal`. After that, annotating
+Services manually with `nodeportlocal.antrea.io` is no longer required. AKO will
+automatically annotate Services of type `LoadBalancer`, along with backend
+ClusterIP Services used by Ingress resources (for which AVI is the Ingress
+class). For more information refer to the [AKO
+documentation](https://avinetworks.com/docs/ako/1.5/handling-objects/).
diff --git a/content/docs/v2.2.0-alpha.2/docs/noencap-hybrid-modes.md b/content/docs/v2.2.0-alpha.2/docs/noencap-hybrid-modes.md
new file mode 100644
index 00000000..df713e28
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/noencap-hybrid-modes.md
@@ -0,0 +1,183 @@
+# NoEncap and Hybrid Traffic Modes of Antrea
+
+Besides the default `Encap` mode, in which Pod traffic across Nodes will be
+encapsulated and sent over tunnels, Antrea also supports `NoEncap` and `Hybrid`
+traffic modes. In `NoEncap` mode, Antrea does not encapsulate Pod traffic, but
+relies on the Node network to route the traffic across Nodes. In `Hybrid` mode,
+Antrea encapsulates Pod traffic when the source Node and the destination Node
+are in different subnets, but does not encapsulate when the source and the
+destination Nodes are in the same subnet. This document describes how to
+configure Antrea with the `NoEncap` and `Hybrid` modes.
+
+The NoEncap and Hybrid traffic modes require Antrea Proxy for correct
+NetworkPolicy enforcement, which is why trying to disable Antrea Proxy in these
+modes will normally cause the Antrea Agent to fail. It is possible to override
+this behavior and force Antrea Proxy to be disabled by setting the
+`ALLOW_NO_ENCAP_WITHOUT_ANTREA_PROXY` environment variable to `true` for the
+Antrea Agent in the [Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea.yml).
+For example:
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: antrea-agent
+ labels:
+ component: antrea-agent
+spec:
+ template:
+ spec:
+ containers:
+ - name: antrea-agent
+ env:
+ - name: ALLOW_NO_ENCAP_WITHOUT_ANTREA_PROXY
+ value: "true"
+```
+
+Note that changing the traffic mode in an existing cluster, where Antrea is
+currently installed or was previously installed, may require restarting existing
+workloads. In particular, the choice of traffic mode has an impact on the MTU
+value used for Pod network interfaces. When changing the traffic mode from
+`NoEncap` to `Encap`, existing workloads should be restarted, so that new
+network interfaces with a lower MTU value can be created.
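+
+For example, Deployment workloads can be restarted with the following command
+(the Deployment name and Namespace are placeholders):
+
+```bash
+kubectl rollout restart deployment <deployment-name> -n <namespace>
+```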
+
+## Hybrid Mode
+
+Let us start with `Hybrid` mode, which is simpler to configure. `Hybrid` mode
+does not encapsulate Pod traffic when the source and the destination Nodes are
+in the same subnet. Thus, it requires the Node network to allow Pod IP addresses
+to be sent out from the Nodes' NICs. Not all networks and clouds satisfy this
+requirement, or in some cases satisfying it might require specific
+configuration of the Node network. For example:
+
+* On AWS, the source/destination checks must be disabled on the EC2 instances of
+the Kubernetes Nodes, as described in the
+[AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck); see the example after this list.
+
+* On Google Compute Engine, IP forwarding must be enabled on the VM instances as
+described in the [Google Cloud documentation](https://cloud.google.com/vpc/docs/using-routes#canipforward).
+
+* On Azure, there is no way to let VNet forward unknown IPs, hence Antrea
+`Hybrid` mode cannot work on Azure.
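+
+As an illustration for the AWS case, the source/destination check can be
+disabled with the AWS CLI as follows (the instance ID is a placeholder):
+
+```bash
+aws ec2 modify-instance-attribute --instance-id <instance-id> --no-source-dest-check
+```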
+
+If the Node network does allow Pod IP addresses to be sent out from the Nodes,
+you can configure Antrea to run in the `Hybrid` mode by setting the `trafficEncapMode`
+config parameter of `antrea-agent` to `hybrid`. The `trafficEncapMode` config
+parameter is defined in `antrea-agent.conf` of the `antrea` ConfigMap in the
+[Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea.yml).
+
+```yaml
+ antrea-agent.conf: |
+ trafficEncapMode: hybrid
+```
+
+After changing the config parameter, you can deploy Antrea in `Hybrid` mode with
+the usual command:
+
+```bash
+kubectl apply -f antrea.yml
+```
+
+## NoEncap Mode
+
+In `NoEncap` mode, Antrea never encapsulates Pod traffic. Just like in `Hybrid`
+mode, the Node network needs to allow Pod IP addresses to be sent out from Nodes.
+When the Nodes are not in the same subnet, `NoEncap` mode additionally requires
+that the Node network be able to route the Pod traffic from the source Node to
+the destination Node. There are two ways to enable this routing in the Node
+network:
+
+* Leverage Route Controller of [Kubernetes Cloud Controller Manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller).
+The Kubernetes Cloud Providers that implement Route Controller can add routes
+to the cloud network routers for the Pod CIDRs of Nodes, and then the cloud
+network is able to route Pod traffic between Nodes. This Route Controller
+functionality is supported by the Cloud Provider implementations of the major
+clouds, including: [AWS](https://github.com/kubernetes/cloud-provider-aws),
+[Azure](https://github.com/kubernetes-sigs/cloud-provider-azure),
+[GCP](https://github.com/kubernetes/cloud-provider-gcp),
+and [vSphere (with NSX-T)](https://github.com/kubernetes/cloud-provider-vsphere).
+
+* Run a routing protocol or even manually configure routers to add routes to
+the Node network routers. For example, Antrea can work with [kube-router](https://www.kube-router.io)
+and leverage kube-router to advertise Pod CIDRs to routers using BGP. Section
+[Using kube-router for BGP](#using-kube-router-for-bgp) describes how to
+configure Antrea and kube-router to work together.
+
+When the Node network can support forwarding and routing of Pod traffic, Antrea
+can be configured to run in the `NoEncap` mode, by setting the `trafficEncapMode`
+config parameter of `antrea-agent` to `noEncap`. By default, Antrea performs SNAT
+(source network address translation) for the outbound connections from a Pod to
+outside of the Pod network, using the Node's IP address as the SNAT IP. In the
+`NoEncap` mode, as the Node network knows about Pod IP addresses, the SNAT by
+Antrea might be unnecessary. In this case, you can disable it by setting the
+`noSNAT` config parameter to `true`. The `trafficEncapMode` and `noSNAT` config
+parameters are defined in `antrea-agent.conf` of the `antrea` ConfigMap in the
+[Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea.yml).
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ trafficEncapMode: noEncap
+ noSNAT: false # Set to true to disable Antrea SNAT for external traffic
+```
+
+After changing the parameters, you can deploy Antrea in `noEncap` mode by applying
+the deployment yaml.
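+
+As in the `Hybrid` mode example above:
+
+```bash
+kubectl apply -f antrea.yml
+```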
+
+### Using kube-router for BGP
+
+We can run kube-router in advertisement-only mode to advertise Pod CIDRs to the
+peered routers, so the routers can know how to route Pod traffic to the Nodes.
+To deploy kube-router in advertisement-only mode, first download the
+[kube-router DaemonSet template](https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml):
+
+```bash
+curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
+```
+
+Then edit the yaml file and set the following kube-router arguments:
+
+```yaml
+- "--run-router=true"
+- "--run-firewall=false"
+- "--run-service-proxy=false"
+- "--enable-cni=false"
+- "--enable-ibgp=false"
+- "--enable-overlay=false"
+- "--enable-pod-egress=false"
+- "--peer-router-ips=<CHANGE ME>"
+- "--peer-router-asns=<CHANGE ME>"
+```
+
+The BGP peers should be configured by specifying the `--peer-router-asns` and
+`--peer-router-ips` parameters. Note, the ASNs and IPs must match the
+configuration on the peered routers. For example:
+
+```yaml
+- "--peer-router-ips=192.168.1.99,192.168.1.100"
+- "--peer-router-asns=65000,65000"
+```
+
+Then you can deploy the kube-router DaemonSet with:
+
+```bash
+kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
+```
+
+You can verify that the kube-router Pods are running on the Nodes of your
+Kubernetes cluster with the following command (the cluster in this example has
+only two Nodes):
+
+```bash
+$ kubectl -n kube-system get pods -l k8s-app=kube-router
+NAME READY STATUS RESTARTS AGE
+kube-router-rn4xc 1/1 Running 0 1m
+kube-router-vhrf5 1/1 Running 0 1m
+```
+
+Antrea can be deployed either before or after kube-router, with the `NoEncap`
+mode.
diff --git a/content/docs/v2.2.0-alpha.2/docs/octant-plugin-installation.md b/content/docs/v2.2.0-alpha.2/docs/octant-plugin-installation.md
new file mode 100644
index 00000000..bce821ca
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/octant-plugin-installation.md
@@ -0,0 +1,6 @@
+# Octant and antrea-octant-plugin installation
+
+***Octant is no longer maintained and the antrea-octant-plugin has been removed
+ as of Antrea v1.13. Please refer to [#4640](https://github.com/antrea-io/antrea/issues/4640)
+ for more information, and check out the [Antrea web UI](https://github.com/antrea-io/antrea-ui)
+ for an alternative.***
diff --git a/content/docs/v2.2.0-alpha.2/docs/os-issues.md b/content/docs/v2.2.0-alpha.2/docs/os-issues.md
new file mode 100644
index 00000000..78cab439
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/os-issues.md
@@ -0,0 +1,119 @@
+# OS-specific known issues
+
+The following issues were encountered when testing Antrea on different OSes, or
+reported by Antrea users. When possible we try to provide a workaround.
+
+## CoreOS
+
+| Issues |
+| ------ |
+| [#626](https://github.com/antrea-io/antrea/issues/626) |
+
+**CoreOS Container Linux has reached its
+ [end-of-life](https://www.openshift.com/learn/topics/coreos) on May 26, 2020
+ and no longer receives updates. It is recommended to migrate to another
+ Operating System as soon as possible.**
+
+CoreOS uses networkd for network configuration. By default, all interfaces are
+managed by networkd because of the [configuration
+files](https://github.com/coreos/init/tree/master/systemd/network) that ship
+with CoreOS. Unfortunately, that includes the gateway interface created by
+Antrea (`antrea-gw0` by default). Most of the time, this is not an issue, but if
+networkd is restarted for any reason, it will cause the interface to lose its IP
+configuration, and all the routes associated with the interface will be
+deleted. To avoid this issue, we recommend that you create the following
+configuration files:
+
+```text
+# /etc/systemd/network/90-antrea-ovs.network
+[Match]
+# use the correct name for the gateway if you changed the Antrea configuration
+Name=antrea-gw0 ovs-system
+Driver=openvswitch
+
+[Link]
+Unmanaged=yes
+```
+
+```text
+# /etc/systemd/network/90-antrea-veth.network
+# may be redundant with 50-docker-veth.network (name may differ based on CoreOS version), which should not be an issue
+[Match]
+Driver=veth
+
+[Link]
+Unmanaged=yes
+```
+
+```text
+# /etc/systemd/network/90-antrea-tun.network
+[Match]
+Name=genev_sys_* vxlan_sys_* gre_sys stt_sys_*
+
+[Link]
+Unmanaged=yes
+```
+
+Note that this fix requires a version of CoreOS `>= 1262.0.0` (Dec 2016), as the
+networkd `Unmanaged` option was not supported before that.
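+
+After restarting systemd-networkd (or rebooting), you can check that the
+interfaces are indeed ignored by networkd; the output format varies across
+versions, but the setup state for these interfaces should show as `unmanaged`:
+
+```bash
+networkctl list
+```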
+
+## Photon OS 3.0
+
+| Issues |
+| ------ |
+| [#591](https://github.com/antrea-io/antrea/issues/591) |
+| [#1516](https://github.com/antrea-io/antrea/issues/1516) |
+
+If your K8s Nodes are running Photon OS 3.0, you may see error messages in the
+antrea-agent logs like this one: `"Received bundle error msg: [...]"`. These
+messages indicate that some flow entries could not be added to the OVS
+bridge. This usually indicates that the Kernel was not compiled with the
+`CONFIG_NF_CONNTRACK_ZONES` option, as this option was only enabled recently in
+Photon OS. This option is required by the Antrea OVS datapath. To confirm that
+this is indeed the issue, you can run the following command on one of your
+Nodes:
+
+```bash
+grep CONFIG_NF_CONNTRACK_ZONES= /boot/config-`uname -r`
+```
+
+If you do *not* see the following output, then it confirms that your Kernel is
+indeed missing this option:
+
+```text
+CONFIG_NF_CONNTRACK_ZONES=y
+```
+
+To fix this issue and be able to run Antrea on your Photon OS Nodes, you will
+need to upgrade to a more recent version: `>= 4.19.87-4` (Jan 2020). You can
+achieve this by running `tdnf upgrade linux-esx` on all your Nodes.
+
+After this fix, all the Antrea Agents should be running correctly. If you still
+experience connectivity issues, it may be because of Photon's default firewall
+rules, which are quite strict by
+[default](https://vmware.github.io/photon/assets/files/html/3.0/photon_admin/default-firewall-settings.html). The
+easiest workaround is to accept all traffic on the gateway interface created by
+Antrea (`antrea-gw0` by default), which enables traffic to flow between the Node
+and the Pod network:
+
+```bash
+iptables -A INPUT -i antrea-gw0 -j ACCEPT
+```
+
+### Pod Traffic Shaping
+
+Antrea provides support for Pod [Traffic Shaping](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping)
+by leveraging the open-source [bandwidth plugin](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth)
+maintained by the CNI project. This plugin requires the following Kernel
+modules: `ifb`, `sch_tbf` and `sch_ingress`. It seems that at the moment Photon
+OS 3.0 is built without the `ifb` Kernel module, which you can confirm by
+running `modprobe --dry-run ifb`: an error would indicate that the module is
+indeed missing. Without this module, Pods with the
+`kubernetes.io/egress-bandwidth` annotation cannot be created successfully. Pods
+with no traffic shaping annotation, or which only use the
+`kubernetes.io/ingress-bandwidth` annotation, can still be created successfully
+as they do not require the creation of an `ifb` device.
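+
+For reference, this is the kind of Pod annotation that cannot be realized
+without the `ifb` module (a minimal illustrative example):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: bandwidth-test
+  annotations:
+    kubernetes.io/egress-bandwidth: 1M
+spec:
+  containers:
+  - name: app
+    image: nginx
+```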
+
+If Photon OS is patched to enable `ifb`, we will update this documentation to
+reflect this change, and include information about which Photon OS version can
+support egress traffic shaping.
diff --git a/content/docs/v2.2.0-alpha.2/docs/ovs-offload.md b/content/docs/v2.2.0-alpha.2/docs/ovs-offload.md
new file mode 100644
index 00000000..51eb6336
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/ovs-offload.md
@@ -0,0 +1,211 @@
+# OVS Hardware Offload
+
+The OVS software-based solution is CPU intensive, affecting system performance
+and preventing full utilization of the available bandwidth. OVS 2.8 and above support
+a feature called OVS Hardware Offload which improves performance significantly.
+This feature allows offloading the OVS data-plane to the NIC while keeping the
+OVS control-plane unmodified. It uses SR-IOV technology with a VF representor
+host net-device. The VF representor plays the same role as TAP devices
+in a Para-Virtual (PV) setup. A packet sent through the VF representor on the host
+arrives at the VF, and a packet sent through the VF is received by its representor.
+
+## Supported Ethernet controllers
+
+The following manufacturers are known to work:
+
+- Mellanox ConnectX-5 and above
+
+## Prerequisites
+
+- Antrea v0.9.0 or greater
+- Linux Kernel 5.7 or greater
+- iproute 4.12 or greater
+
+## Instructions for Mellanox ConnectX-5 and Above
+
+In order to enable Open vSwitch hardware offload, the following steps
+are required. Please make sure you have root privileges to run the commands
+below.
+
+Check the number of VFs supported on the NIC
+
+```bash
+cat /sys/class/net/enp3s0f0/device/sriov_totalvfs
+8
+```
+
+Create the VFs
+
+```bash
+echo '4' > /sys/class/net/enp3s0f0/device/sriov_numvfs
+```
+
+Verify that the VFs are created
+
+```bash
+ip link show enp3s0f0
+8: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
+ link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff
+ vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+ vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+ vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+ vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+```
+
+Bring the PF up
+
+```bash
+ip link set enp3s0f0 up
+```
+
+Unbind the VFs from the driver
+
+```bash
+echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
+echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
+echo 0000:03:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind
+echo 0000:03:00.5 > /sys/bus/pci/drivers/mlx5_core/unbind
+```
+
+Configure SR-IOV VFs to switchdev mode
+
+```bash
+devlink dev eswitch set pci/0000:03:00.0 mode switchdev
+ethtool -K enp3s0f0 hw-tc-offload on
+```
+
+Bind the VFs to the driver
+
+```bash
+echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
+echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/bind
+echo 0000:03:00.4 > /sys/bus/pci/drivers/mlx5_core/bind
+echo 0000:03:00.5 > /sys/bus/pci/drivers/mlx5_core/bind
+```
+
+## SR-IOV network device plugin configuration
+
+Create a ConfigMap that defines SR-IOV resource pool configuration
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: sriovdp-config
+ namespace: kube-system
+data:
+ config.json: |
+ {
+ "resourceList": [{
+ "resourcePrefix": "mellanox.com",
+ "resourceName": "cx5_sriov_switchdev",
+ "isRdma": true,
+ "selectors": {
+ "vendors": ["15b3"],
+ "devices": ["1018"],
+ "drivers": ["mlx5_core"]
+ }
+ }
+ ]
+ }
+```
+
+Deploy the SR-IOV network device plugin as a DaemonSet. See <https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin>.
+
+Deploy the multus CNI as a DaemonSet. See <https://github.com/k8snetworkplumbingwg/multus-cni>.
+
+Create a NetworkAttachmentDefinition CRD with the Antrea CNI config.
+
+```yaml
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+ name: default
+ namespace: kube-system
+ annotations:
+ k8s.v1.cni.cncf.io/resourceName: mellanox.com/cx5_sriov_switchdev
+spec:
+ config: '{
+ "cniVersion": "0.3.1",
+ "name": "antrea",
+ "plugins": [ { "type": "antrea", "ipam": { "type": "host-local" } }, { "type": "portmap", "capabilities": {"portMappings": true} }, { "type": "bandwidth", "capabilities": {"bandwidth": true} }]
+}'
+
+```
+
+## Deploy Antrea Image with hw-offload enabled
+
+Modify build/yamls/antrea.yml to add the offload flag to the `start_ovs` command
+
+```yaml
+ - command:
+ - start_ovs
+ - --hw-offload
+```
+
+## Deploy POD with OVS hardware-offload
+
+Create a POD spec and request a VF
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: ovs-offload-pod1
+ annotations:
+ v1.multus-cni.io/default-network: default
+spec:
+ containers:
+ - name: ovs-offload-app
+ image: networkstatic/iperf3
+ command:
+ - sh
+ - -c
+ - |
+ sleep 1000000
+ resources:
+ requests:
+ mellanox.com/cx5_sriov_switchdev: '1'
+ limits:
+ mellanox.com/cx5_sriov_switchdev: '1'
+```
+
+## Verify Hardware-Offloads is Working
+
+Run iperf3 server on POD 1
+
+```bash
+kubectl exec -it ovs-offload-pod1 -- iperf3 -s
+```
+
+Run iperf3 client on POD 2
+
+```bash
+kubectl exec -it ovs-offload-pod2 -- iperf3 -c 192.168.1.17 -t 100
+```
+
+Check traffic on the VF representor port, and verify that only TCP connection establishment appears
+
+```text
+tcpdump -i mofed-te-b5583b tcp
+listening on mofed-te-b5583b, link-type EN10MB (Ethernet), capture size 262144 bytes
+22:24:44.969516 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [S], seq 89800743, win 64860, options [mss 1410,sackOK,TS val 491087056 ecr 0,nop,wscale 7], length 0
+22:24:44.969773 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43558: Flags [S.], seq 1312764151, ack 89800744, win 64308, options [mss 1410,sackOK,TS val 4095895608 ecr 491087056,nop,wscale 7], length 0
+22:24:45.085558 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [.], ack 1, win 507, options [nop,nop,TS val 491087222 ecr 4095895608], length 0
+22:24:45.085592 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [P.], seq 1:38, ack 1, win 507, options [nop,nop,TS val 491087222 ecr 4095895608], length 37
+22:24:45.086311 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [S], seq 3802331506, win 64860, options [mss 1410,sackOK,TS val 491087279 ecr 0,nop,wscale 7], length 0
+22:24:45.086462 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43560: Flags [S.], seq 441940709, ack 3802331507, win 64308, options [mss 1410,sackOK,TS val 4095895725 ecr 491087279,nop,wscale 7], length 0
+22:24:45.086624 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [.], ack 1, win 507, options [nop,nop,TS val 491087279 ecr 4095895725], length 0
+22:24:45.086654 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [P.], seq 1:38, ack 1, win 507, options [nop,nop,TS val 491087279 ecr 4095895725], length 37
+22:24:45.086715 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43560: Flags [.], ack 38, win 503, options [nop,nop,TS val 4095895725 ecr 491087279], length 0
+```
+
+Check that the datapath rules are offloaded
+
+```text
+ovs-appctl dpctl/dump-flows --names type=offloaded
+recirc_id(0),in_port(eth0),eth(src=16:fd:c6:0b:60:52),eth_type(0x0800),ipv4(src=192.168.1.17,frag=no), packets:2235857, bytes:147599302, used:0.550s, actions:ct(zone=65520),recirc(0x18)
+ct_state(+est+trk),ct_mark(0),recirc_id(0x18),in_port(eth0),eth(dst=42:66:d7:45:0d:7e),eth_type(0x0800),ipv4(dst=192.168.1.0/255.255.255.0,frag=no), packets:2235857, bytes:147599302, used:0.550s, actions:eth1
+recirc_id(0),in_port(eth1),eth(src=42:66:d7:45:0d:7e),eth_type(0x0800),ipv4(src=192.168.1.16,frag=no), packets:133410141, bytes:195255745684, used:0.550s, actions:ct(zone=65520),recirc(0x16)
+ct_state(+est+trk),ct_mark(0),recirc_id(0x16),in_port(eth1),eth(dst=16:fd:c6:0b:60:52),eth_type(0x0800),ipv4(dst=192.168.1.0/255.255.255.0,frag=no), packets:133410138, bytes:195255745483, used:0.550s, actions:eth0
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/prometheus-integration.md b/content/docs/v2.2.0-alpha.2/docs/prometheus-integration.md
new file mode 100644
index 00000000..776d9c00
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/prometheus-integration.md
@@ -0,0 +1,776 @@
+# Prometheus Integration
+
+## Purpose
+
+The Prometheus server can monitor various metrics and provide observability for
+the Antrea Controller and Agent components. This document provides general
+guidelines for configuring the Prometheus server to operate with the Antrea
+components.
+
+## About Prometheus
+
+[Prometheus](https://prometheus.io/) is an open source monitoring and alerting
+server. Prometheus is capable of collecting metrics from various Kubernetes
+components, storing them, and generating alerts.
+Prometheus can provide visibility by integrating with other products such as
+[Grafana](https://grafana.com/).
+
+One of Prometheus' capabilities is the self-discovery of Kubernetes services
+which expose their metrics, so Prometheus can scrape the metrics of any
+additional components added to the cluster without further configuration changes.
+
+## Antrea Configuration
+
+Enable the Prometheus metrics listener by setting the `enablePrometheusMetrics`
+parameter to `true` in the Controller and the Agent configurations.
+
+## Prometheus Configuration
+
+### Prometheus version
+
+Prometheus integration with Antrea is validated as part of CI using Prometheus v2.46.0.
+
+### Prometheus RBAC
+
+Prometheus requires access to Kubernetes API resources for the service discovery
+capability. Reading metrics also requires access to the "/metrics" API
+endpoints.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: prometheus
+rules:
+- apiGroups: [""]
+ resources:
+ - nodes
+ - nodes/proxy
+ - services
+ - endpoints
+ - pods
+ verbs: ["get", "list", "watch"]
+- apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs: ["get", "list", "watch"]
+- nonResourceURLs: ["/metrics"]
+ verbs: ["get"]
+```
+
+### Antrea Metrics Listener Access
+
+To scrape the metrics from the Antrea Controller and Agent, Prometheus needs
+the following permissions:
+
+```yaml
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: prometheus-antrea
+rules:
+- nonResourceURLs:
+ - /metrics
+ verbs:
+ - get
+```
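+
+These ClusterRoles must be bound to the ServiceAccount that the Prometheus
+server runs as. A minimal sketch, assuming a `prometheus` ServiceAccount in a
+`monitoring` Namespace (adjust both names to your Prometheus deployment):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: prometheus-antrea
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: prometheus-antrea
+subjects:
+- kind: ServiceAccount
+  name: prometheus
+  namespace: monitoring
+```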
+
+### Antrea Components Scraping Configuration
+
+Add the following jobs to the Prometheus scraping configuration to enable
+metrics collection from Antrea components. The Antrea Agent metrics endpoint is
+exposed through the Antrea apiserver, on the port given by the `apiport` config
+parameter in `antrea-agent.conf` (default value is 10350). The Antrea
+Controller metrics endpoint is exposed through the Antrea apiserver, on the
+port given by the `apiport` config parameter in `antrea-controller.conf`
+(default value is 10349).
+
+#### Controller Scraping
+
+```yaml
+- job_name: 'antrea-controllers'
+  kubernetes_sd_configs:
+  - role: endpoints
+  scheme: https
+  tls_config:
+    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    insecure_skip_verify: true
+  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+  relabel_configs:
+  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_container_name]
+    action: keep
+    regex: kube-system;antrea-controller
+  - source_labels: [__meta_kubernetes_pod_node_name, __meta_kubernetes_pod_name]
+    target_label: instance
+```
+
+#### Agent Scraping
+
+```yaml
+- job_name: 'antrea-agents'
+  kubernetes_sd_configs:
+  - role: pod
+  scheme: https
+  tls_config:
+    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    insecure_skip_verify: true
+  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+  relabel_configs:
+  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_container_name]
+    action: keep
+    regex: kube-system;antrea-agent
+  - source_labels: [__meta_kubernetes_pod_node_name, __meta_kubernetes_pod_name]
+    target_label: instance
+```
+
+For further reference, see the complete
+[configuration file](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea-prometheus.yml).
+
+That configuration file can be used to deploy a Prometheus server with
+scraping configuration for Antrea services.
+To deploy this configuration, use
+`kubectl apply -f build/yamls/antrea-prometheus.yml`.
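+
+Before wiring up Prometheus, you can check that an Antrea Agent metrics
+endpoint is reachable. This is only a sketch: it assumes the `prometheus`
+ServiceAccount bound above, `kubectl create token` support (Kubernetes v1.24+),
+and that the machine running `curl` can reach the Node IP on port 10350:
+
+```bash
+# Mint a short-lived token for the ServiceAccount granted "get" on /metrics.
+TOKEN=$(kubectl -n monitoring create token prometheus)
+# InternalIP of the first Node in the cluster.
+NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
+# Query the Antrea Agent metrics endpoint (default apiport: 10350).
+curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${NODE_IP}:10350/metrics" | head
+```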
+
+## Antrea Prometheus Metrics
+
+Antrea Controller and Agents expose various metrics, some of which are provided
+by the Antrea components themselves and others by 3rd party components used by
+the Antrea components.
+
+Below is a list of metrics, grouped by the component that provides them.
+
+### Antrea Metrics
+
+#### Antrea Agent Metrics
+
+- **antrea_agent_conntrack_antrea_connection_count:** Number of connections
+in the Antrea ZoneID of the conntrack table. This metric gets updated at
+an interval specified by flowPollInterval, a configuration parameter for
+the Agent.
+- **antrea_agent_conntrack_max_connection_count:** Size of the conntrack
+table. This metric gets updated at an interval specified by flowPollInterval,
+a configuration parameter for the Agent.
+- **antrea_agent_conntrack_total_connection_count:** Number of connections
+in the conntrack table. This metric gets updated at an interval specified
+by flowPollInterval, a configuration parameter for the Agent.
+- **antrea_agent_denied_connection_count:** Number of denied connections
+detected by Flow Exporter deny connections tracking. This metric gets updated
+when a flow is rejected/dropped by network policy.
+- **antrea_agent_egress_networkpolicy_rule_count:** Number of egress
+NetworkPolicy rules on local Node which are managed by the Antrea Agent.
+- **antrea_agent_flow_collector_reconnection_count:** Number of re-connections
+between Flow Exporter and flow collector. This metric gets updated whenever
+the connection is re-established between the Flow Exporter and the flow
+collector (e.g. the Flow Aggregator).
+- **antrea_agent_ingress_networkpolicy_rule_count:** Number of ingress
+NetworkPolicy rules on local Node which are managed by the Antrea Agent.
+- **antrea_agent_local_pod_count:** Number of Pods on local Node which are
+managed by the Antrea Agent.
+- **antrea_agent_networkpolicy_count:** Number of NetworkPolicies on local
+Node which are managed by the Antrea Agent.
+- **antrea_agent_ovs_flow_count:** Flow count for each OVS flow table. The
+TableID and TableName are used as labels.
+- **antrea_agent_ovs_flow_ops_count:** Number of OVS flow operations,
+partitioned by operation type (add, modify and delete).
+- **antrea_agent_ovs_flow_ops_error_count:** Number of OVS flow operation
+errors, partitioned by operation type (add, modify and delete).
+- **antrea_agent_ovs_flow_ops_latency_milliseconds:** The latency of OVS
+flow operations, partitioned by operation type (add, modify and delete).
+- **antrea_agent_ovs_meter_packet_dropped_count:** Number of packets dropped by
+OVS meter. The value is greater than 0 when the packets exceed the rate-limit.
+- **antrea_agent_ovs_total_flow_count:** Total flow count of all OVS flow
+tables.
+
+#### Antrea Controller Metrics
+
+- **antrea_controller_acnp_status_updates:** The total number of actual
+status updates performed for Antrea ClusterNetworkPolicy Custom Resources
+- **antrea_controller_address_group_processed:** The total number of
+address-groups processed
+- **antrea_controller_address_group_sync_duration_milliseconds:** The duration
+of syncing address-group
+- **antrea_controller_annp_status_updates:** The total number of actual
+status updates performed for Antrea NetworkPolicy Custom Resources
+- **antrea_controller_applied_to_group_processed:** The total number of
+applied-to-groups processed
+- **antrea_controller_applied_to_group_sync_duration_milliseconds:** The
+duration of syncing applied-to-group
+- **antrea_controller_length_address_group_queue:** The length of
+AddressGroupQueue
+- **antrea_controller_length_applied_to_group_queue:** The length of
+AppliedToGroupQueue
+- **antrea_controller_length_network_policy_queue:** The length of
+InternalNetworkPolicyQueue
+- **antrea_controller_network_policy_processed:** The total number of
+internal-networkpolicies processed
+- **antrea_controller_network_policy_sync_duration_milliseconds:** The
+duration of syncing internal-networkpolicy
+
+#### Antrea Proxy Metrics
+
+- **antrea_proxy_sync_proxy_rules_duration_seconds:** SyncProxyRules duration
+of Antrea Proxy in seconds
+- **antrea_proxy_total_endpoints_installed:** The number of Endpoints
+installed by Antrea Proxy
+- **antrea_proxy_total_endpoints_updates:** The cumulative number of Endpoint
+updates received by Antrea Proxy
+- **antrea_proxy_total_services_installed:** The number of Services installed
+by Antrea Proxy
+- **antrea_proxy_total_services_updates:** The cumulative number of Service
+updates received by Antrea Proxy
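+
+Once scraped, these metrics can be explored with standard PromQL queries. Two
+simple examples using the Agent metrics listed above (the `instance` label is
+set by the relabeling configuration shown earlier):
+
+```text
+# Total OVS flows per Node:
+sum by (instance) (antrea_agent_ovs_total_flow_count)
+
+# Per-second rate of denied connections over the last 5 minutes:
+rate(antrea_agent_denied_connection_count[5m])
+```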
+
+### Common Metrics Provided by Infrastructure
+
+#### Aggregator Metrics
+
+- **aggregator_discovery_aggregation_count_total:** Counter of number of
+times discovery was aggregated
+
+#### Apiserver Metrics
+
+- **apiserver_audit_event_total:** Counter of audit events generated and
+sent to the audit backend.
+- **apiserver_audit_requests_rejected_total:** Counter of apiserver requests
+rejected due to an error in audit logging backend.
+- **apiserver_client_certificate_expiration_seconds:** Distribution of the
+remaining lifetime on the certificate used to authenticate a request.
+- **apiserver_current_inflight_requests:** Maximal number of currently used
+inflight request limit of this apiserver per request kind in last second.
+- **apiserver_delegated_authn_request_duration_seconds:** Request latency
+in seconds. Broken down by status code.
+- **apiserver_delegated_authn_request_total:** Number of HTTP requests
+partitioned by status code.
+- **apiserver_delegated_authz_request_duration_seconds:** Request latency
+in seconds. Broken down by status code.
+- **apiserver_delegated_authz_request_total:** Number of HTTP requests
+partitioned by status code.
+- **apiserver_envelope_encryption_dek_cache_fill_percent:** Percent of the
+cache slots currently occupied by cached DEKs.
+- **apiserver_flowcontrol_read_vs_write_current_requests:** EXPERIMENTAL:
+Observations, at the end of every nanosecond, of the number of requests
+(as a fraction of the relevant limit) waiting or in regular stage of execution
+- **apiserver_flowcontrol_seat_fair_frac:** Fair fraction of server's
+concurrency to allocate to each priority level that can use it
+- **apiserver_longrunning_requests:** Gauge of all active long-running
+apiserver requests broken out by verb, group, version, resource, scope and
+component. Not all requests are tracked this way.
+- **apiserver_request_duration_seconds:** Response latency distribution in
+seconds for each verb, dry run value, group, version, resource, subresource,
+scope and component.
+- **apiserver_request_filter_duration_seconds:** Request filter latency
+distribution in seconds, for each filter type
+- **apiserver_request_sli_duration_seconds:** Response latency distribution
+(not counting webhook duration and priority & fairness queue wait times)
+in seconds for each verb, group, version, resource, subresource, scope
+and component.
+- **apiserver_request_slo_duration_seconds:** Response latency distribution
+(not counting webhook duration and priority & fairness queue wait times)
+in seconds for each verb, group, version, resource, subresource, scope
+and component.
+- **apiserver_request_total:** Counter of apiserver requests broken out
+for each verb, dry run value, group, version, resource, scope, component,
+and HTTP response code.
+- **apiserver_response_sizes:** Response size distribution in bytes for each
+group, version, verb, resource, subresource, scope and component.
+- **apiserver_storage_data_key_generation_duration_seconds:** Latencies in
+seconds of data encryption key(DEK) generation operations.
+- **apiserver_storage_data_key_generation_failures_total:** Total number of
+failed data encryption key(DEK) generation operations.
+- **apiserver_storage_envelope_transformation_cache_misses_total:** Total
+number of cache misses while accessing key decryption key(KEK).
+- **apiserver_tls_handshake_errors_total:** Number of requests dropped with
+'TLS handshake error from' error
+- **apiserver_watch_events_sizes:** Watch event size distribution in bytes
+- **apiserver_watch_events_total:** Number of events sent in watch clients
+- **apiserver_webhooks_x509_insecure_sha1_total:** Counts the number of
+requests to servers with insecure SHA1 signatures in their serving certificate
+OR the number of connection failures due to the insecure SHA1 signatures
+(either/or, based on the runtime environment)
+- **apiserver_webhooks_x509_missing_san_total:** Counts the number of requests
+to servers missing SAN extension in their serving certificate OR the number
+of connection failures due to the lack of x509 certificate SAN extension
+missing (either/or, based on the runtime environment)
+
+#### Authenticated Metrics
+
+- **authenticated_user_requests:** Counter of authenticated requests broken
+out by username.
+
+#### Authentication Metrics
+
+- **authentication_attempts:** Counter of authenticated attempts.
+- **authentication_duration_seconds:** Authentication duration in seconds
+broken out by result.
+- **authentication_token_cache_active_fetch_count:**
+- **authentication_token_cache_fetch_total:**
+- **authentication_token_cache_request_duration_seconds:**
+- **authentication_token_cache_request_total:**
+
+#### Authorization Metrics
+
+- **authorization_attempts_total:** Counter of authorization attempts broken
+down by result. It can be either 'allowed', 'denied', 'no-opinion' or 'error'.
+- **authorization_duration_seconds:** Authorization duration in seconds
+broken out by result.
+
+#### Cardinality Metrics
+
+- **cardinality_enforcement_unexpected_categorizations_total:** The count
+of unexpected categorizations during cardinality enforcement.
+
+#### Disabled Metrics
+
+- **disabled_metrics_total:** The count of disabled metrics.
+
+#### Field Metrics
+
+- **field_validation_request_duration_seconds:** Response latency distribution
+in seconds for each field validation value
+
+#### Go Metrics
+
+- **go_cgo_go_to_c_calls_calls_total:** Count of calls made from Go to C by
+the current process. Sourced from /cgo/go-to-c-calls:calls
+- **go_cpu_classes_gc_mark_assist_cpu_seconds_total:** Estimated total CPU
+time goroutines spent performing GC tasks to assist the GC and prevent it
+from falling behind the application. This metric is an overestimate, and not
+directly comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics. Sourced from /cpu/classes/gc/mark/assist:cpu-seconds
+- **go_cpu_classes_gc_mark_dedicated_cpu_seconds_total:** Estimated total
+CPU time spent performing GC tasks on processors (as defined by GOMAXPROCS)
+dedicated to those tasks. This metric is an overestimate, and not directly
+comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics. Sourced from /cpu/classes/gc/mark/dedicated:cpu-seconds
+- **go_cpu_classes_gc_mark_idle_cpu_seconds_total:** Estimated total CPU
+time spent performing GC tasks on spare CPU resources that the Go scheduler
+could not otherwise find a use for. This should be subtracted from the
+total GC CPU time to obtain a measure of compulsory GC CPU time. This
+metric is an overestimate, and not directly comparable to system CPU time
+measurements. Compare only with other /cpu/classes metrics. Sourced from
+/cpu/classes/gc/mark/idle:cpu-seconds
+- **go_cpu_classes_gc_pause_cpu_seconds_total:** Estimated total CPU time
+spent with the application paused by the GC. Even if only one thread is
+running during the pause, this is computed as GOMAXPROCS times the pause
+latency because nothing else can be executing. This is the exact sum of
+samples in /sched/pauses/total/gc:seconds if each sample is multiplied by
+GOMAXPROCS at the time it is taken. This metric is an overestimate, and not
+directly comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics. Sourced from /cpu/classes/gc/pause:cpu-seconds
+- **go_cpu_classes_gc_total_cpu_seconds_total:** Estimated total CPU
+time spent performing GC tasks. This metric is an overestimate, and not
+directly comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics. Sum of all metrics in /cpu/classes/gc. Sourced from
+/cpu/classes/gc/total:cpu-seconds
+- **go_cpu_classes_idle_cpu_seconds_total:** Estimated total available CPU
+time not spent executing any Go or Go runtime code. In other words, the part of
+/cpu/classes/total:cpu-seconds that was unused. This metric is an overestimate,
+and not directly comparable to system CPU time measurements. Compare only
+with other /cpu/classes metrics. Sourced from /cpu/classes/idle:cpu-seconds
+- **go_cpu_classes_scavenge_assist_cpu_seconds_total:** Estimated total CPU
+time spent returning unused memory to the underlying platform, eagerly in
+response to memory pressure. This metric is an overestimate, and not
+directly comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics. Sourced from /cpu/classes/scavenge/assist:cpu-seconds
+- **go_cpu_classes_scavenge_background_cpu_seconds_total:** Estimated total
+CPU time spent performing background tasks to return unused memory to the
+underlying platform. This metric is an overestimate, and not directly
+comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics. Sourced from /cpu/classes/scavenge/background:cpu-seconds
+- **go_cpu_classes_scavenge_total_cpu_seconds_total:** Estimated total CPU
+time spent performing tasks that return unused memory to the underlying
+platform. This metric is an overestimate, and not directly comparable
+to system CPU time measurements. Compare only with other /cpu/classes
+metrics. Sum of all metrics in /cpu/classes/scavenge. Sourced from
+/cpu/classes/scavenge/total:cpu-seconds
+- **go_cpu_classes_total_cpu_seconds_total:** Estimated total available CPU
+time for user Go code or the Go runtime, as defined by GOMAXPROCS. In other
+words, GOMAXPROCS integrated over the wall-clock duration this process has been
+executing for. This metric is an overestimate, and not directly comparable to
+system CPU time measurements. Compare only with other /cpu/classes metrics. Sum
+of all metrics in /cpu/classes. Sourced from /cpu/classes/total:cpu-seconds
+- **go_cpu_classes_user_cpu_seconds_total:** Estimated total CPU time spent
+running user Go code. This may also include some small amount of time spent in
+the Go runtime. This metric is an overestimate, and not directly comparable
+to system CPU time measurements. Compare only with other /cpu/classes
+metrics. Sourced from /cpu/classes/user:cpu-seconds
+- **go_gc_cycles_automatic_gc_cycles_total:** Count of completed GC cycles
+generated by the Go runtime. Sourced from /gc/cycles/automatic:gc-cycles
+- **go_gc_cycles_forced_gc_cycles_total:** Count of completed GC cycles
+forced by the application. Sourced from /gc/cycles/forced:gc-cycles
+- **go_gc_cycles_total_gc_cycles_total:** Count of all completed GC
+cycles. Sourced from /gc/cycles/total:gc-cycles
+- **go_gc_duration_seconds:** A summary of the wall-time pause (stop-the-world)
+duration in garbage collection cycles.
+- **go_gc_gogc_percent:** Heap size target percentage configured by the
+user, otherwise 100. This value is set by the GOGC environment variable,
+and the runtime/debug.SetGCPercent function. Sourced from /gc/gogc:percent
+- **go_gc_gomemlimit_bytes:** Go runtime memory limit configured by the user,
+otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment
+variable, and the runtime/debug.SetMemoryLimit function. Sourced from
+/gc/gomemlimit:bytes
+- **go_gc_heap_allocs_by_size_bytes:** Distribution of heap allocations
+by approximate size. Bucket counts increase monotonically. Note that this
+does not include tiny objects as defined by /gc/heap/tiny/allocs:objects,
+only tiny blocks. Sourced from /gc/heap/allocs-by-size:bytes
+- **go_gc_heap_allocs_bytes_total:** Cumulative sum of memory allocated to
+the heap by the application. Sourced from /gc/heap/allocs:bytes
+- **go_gc_heap_allocs_objects_total:** Cumulative count of heap allocations
+triggered by the application. Note that this does not include tiny objects
+as defined by /gc/heap/tiny/allocs:objects, only tiny blocks. Sourced from
+/gc/heap/allocs:objects
+- **go_gc_heap_frees_by_size_bytes:** Distribution of freed heap allocations
+by approximate size. Bucket counts increase monotonically. Note that this
+does not include tiny objects as defined by /gc/heap/tiny/allocs:objects,
+only tiny blocks. Sourced from /gc/heap/frees-by-size:bytes
+- **go_gc_heap_frees_bytes_total:** Cumulative sum of heap memory freed by
+the garbage collector. Sourced from /gc/heap/frees:bytes
+- **go_gc_heap_frees_objects_total:** Cumulative count of heap allocations
+whose storage was freed by the garbage collector. Note that this does not
+include tiny objects as defined by /gc/heap/tiny/allocs:objects, only tiny
+blocks. Sourced from /gc/heap/frees:objects
+- **go_gc_heap_goal_bytes:** Heap size target for the end of the GC
+cycle. Sourced from /gc/heap/goal:bytes
+- **go_gc_heap_live_bytes:** Heap memory occupied by live objects that were
+marked by the previous GC. Sourced from /gc/heap/live:bytes
+- **go_gc_heap_objects_objects:** Number of objects, live or unswept,
+occupying heap memory. Sourced from /gc/heap/objects:objects
+- **go_gc_heap_tiny_allocs_objects_total:** Count of small allocations that
+are packed together into blocks. These allocations are counted separately
+from other allocations because each individual allocation is not tracked
+by the runtime, only their block. Each block is already accounted for in
+allocs-by-size and frees-by-size. Sourced from /gc/heap/tiny/allocs:objects
+- **go_gc_limiter_last_enabled_gc_cycle:** GC cycle the last time the GC CPU
+limiter was enabled. This metric is useful for diagnosing the root cause
+of an out-of-memory error, because the limiter trades memory for CPU time
+when the GC's CPU time gets too high. This is most likely to occur with use
+of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates
+that it was never enabled. Sourced from /gc/limiter/last-enabled:gc-cycle
+- **go_gc_pauses_seconds:** Deprecated. Prefer the identical
+/sched/pauses/total/gc:seconds. Sourced from /gc/pauses:seconds
+- **go_gc_scan_globals_bytes:** The total amount of global variable space
+that is scannable. Sourced from /gc/scan/globals:bytes
+- **go_gc_scan_heap_bytes:** The total amount of heap space that is
+scannable. Sourced from /gc/scan/heap:bytes
+- **go_gc_scan_stack_bytes:** The number of bytes of stack that were scanned
+last GC cycle. Sourced from /gc/scan/stack:bytes
+- **go_gc_scan_total_bytes:** The total amount of space that is scannable. Sum
+of all metrics in /gc/scan. Sourced from /gc/scan/total:bytes
+- **go_gc_stack_starting_size_bytes:** The stack size of new
+goroutines. Sourced from /gc/stack/starting-size:bytes
+- **go_godebug_non_default_behavior_asynctimerchan_events_total:**
+The number of non-default behaviors executed by the time package due
+to a non-default GODEBUG=asynctimerchan=... setting. Sourced from
+/godebug/non-default-behavior/asynctimerchan:events
+- **go_godebug_non_default_behavior_execerrdot_events_total:**
+The number of non-default behaviors executed by the os/exec package
+due to a non-default GODEBUG=execerrdot=... setting. Sourced from
+/godebug/non-default-behavior/execerrdot:events
+- **go_godebug_non_default_behavior_gocachehash_events_total:**
+The number of non-default behaviors executed by the cmd/go package
+due to a non-default GODEBUG=gocachehash=... setting. Sourced from
+/godebug/non-default-behavior/gocachehash:events
+- **go_godebug_non_default_behavior_gocachetest_events_total:**
+The number of non-default behaviors executed by the cmd/go package
+due to a non-default GODEBUG=gocachetest=... setting. Sourced from
+/godebug/non-default-behavior/gocachetest:events
+- **go_godebug_non_default_behavior_gocacheverify_events_total:**
+The number of non-default behaviors executed by the cmd/go package
+due to a non-default GODEBUG=gocacheverify=... setting. Sourced from
+/godebug/non-default-behavior/gocacheverify:events
+- **go_godebug_non_default_behavior_gotypesalias_events_total:**
+The number of non-default behaviors executed by the go/types package
+due to a non-default GODEBUG=gotypesalias=... setting. Sourced from
+/godebug/non-default-behavior/gotypesalias:events
+- **go_godebug_non_default_behavior_http2client_events_total:**
+The number of non-default behaviors executed by the net/http package
+due to a non-default GODEBUG=http2client=... setting. Sourced from
+/godebug/non-default-behavior/http2client:events
+- **go_godebug_non_default_behavior_http2server_events_total:**
+The number of non-default behaviors executed by the net/http package
+due to a non-default GODEBUG=http2server=... setting. Sourced from
+/godebug/non-default-behavior/http2server:events
+- **go_godebug_non_default_behavior_httplaxcontentlength_events_total:**
+The number of non-default behaviors executed by the net/http package due
+to a non-default GODEBUG=httplaxcontentlength=... setting. Sourced from
+/godebug/non-default-behavior/httplaxcontentlength:events
+- **go_godebug_non_default_behavior_httpmuxgo121_events_total:**
+The number of non-default behaviors executed by the net/http package
+due to a non-default GODEBUG=httpmuxgo121=... setting. Sourced from
+/godebug/non-default-behavior/httpmuxgo121:events
+- **go_godebug_non_default_behavior_httpservecontentkeepheaders_events_total:**
+The number of non-default behaviors executed by the net/http package due to
+a non-default GODEBUG=httpservecontentkeepheaders=... setting. Sourced from
+/godebug/non-default-behavior/httpservecontentkeepheaders:events
+- **go_godebug_non_default_behavior_installgoroot_events_total:**
+The number of non-default behaviors executed by the go/build package
+due to a non-default GODEBUG=installgoroot=... setting. Sourced from
+/godebug/non-default-behavior/installgoroot:events
+- **go_godebug_non_default_behavior_multipartmaxheaders_events_total:**
+The number of non-default behaviors executed by the mime/multipart package
+due to a non-default GODEBUG=multipartmaxheaders=... setting. Sourced from
+/godebug/non-default-behavior/multipartmaxheaders:events
+- **go_godebug_non_default_behavior_multipartmaxparts_events_total:**
+The number of non-default behaviors executed by the mime/multipart package
+due to a non-default GODEBUG=multipartmaxparts=... setting. Sourced from
+/godebug/non-default-behavior/multipartmaxparts:events
+- **go_godebug_non_default_behavior_multipathtcp_events_total:**
+The number of non-default behaviors executed by the net package
+due to a non-default GODEBUG=multipathtcp=... setting. Sourced from
+/godebug/non-default-behavior/multipathtcp:events
+- **go_godebug_non_default_behavior_netedns0_events_total:**
+The number of non-default behaviors executed by the net package
+due to a non-default GODEBUG=netedns0=... setting. Sourced from
+/godebug/non-default-behavior/netedns0:events
+- **go_godebug_non_default_behavior_panicnil_events_total:** The
+number of non-default behaviors executed by the runtime package
+due to a non-default GODEBUG=panicnil=... setting. Sourced from
+/godebug/non-default-behavior/panicnil:events
+- **go_godebug_non_default_behavior_randautoseed_events_total:**
+The number of non-default behaviors executed by the math/rand package
+due to a non-default GODEBUG=randautoseed=... setting. Sourced from
+/godebug/non-default-behavior/randautoseed:events
+- **go_godebug_non_default_behavior_tarinsecurepath_events_total:**
+The number of non-default behaviors executed by the archive/tar package
+due to a non-default GODEBUG=tarinsecurepath=... setting. Sourced from
+/godebug/non-default-behavior/tarinsecurepath:events
+- **go_godebug_non_default_behavior_tls10server_events_total:** The
+number of non-default behaviors executed by the crypto/tls package
+due to a non-default GODEBUG=tls10server=... setting. Sourced from
+/godebug/non-default-behavior/tls10server:events
+- **go_godebug_non_default_behavior_tls3des_events_total:** The
+number of non-default behaviors executed by the crypto/tls package
+due to a non-default GODEBUG=tls3des=... setting. Sourced from
+/godebug/non-default-behavior/tls3des:events
+- **go_godebug_non_default_behavior_tlsmaxrsasize_events_total:**
+The number of non-default behaviors executed by the crypto/tls package
+due to a non-default GODEBUG=tlsmaxrsasize=... setting. Sourced from
+/godebug/non-default-behavior/tlsmaxrsasize:events
+- **go_godebug_non_default_behavior_tlsrsakex_events_total:** The
+number of non-default behaviors executed by the crypto/tls package
+due to a non-default GODEBUG=tlsrsakex=... setting. Sourced from
+/godebug/non-default-behavior/tlsrsakex:events
+- **go_godebug_non_default_behavior_tlsunsafeekm_events_total:** The
+number of non-default behaviors executed by the crypto/tls package
+due to a non-default GODEBUG=tlsunsafeekm=... setting. Sourced from
+/godebug/non-default-behavior/tlsunsafeekm:events
+- **go_godebug_non_default_behavior_winreadlinkvolume_events_total:**
+The number of non-default behaviors executed by the os package due
+to a non-default GODEBUG=winreadlinkvolume=... setting. Sourced from
+/godebug/non-default-behavior/winreadlinkvolume:events
+- **go_godebug_non_default_behavior_winsymlink_events_total:**
+The number of non-default behaviors executed by the os package
+due to a non-default GODEBUG=winsymlink=... setting. Sourced from
+/godebug/non-default-behavior/winsymlink:events
+- **go_godebug_non_default_behavior_x509keypairleaf_events_total:**
+The number of non-default behaviors executed by the crypto/tls package
+due to a non-default GODEBUG=x509keypairleaf=... setting. Sourced from
+/godebug/non-default-behavior/x509keypairleaf:events
+- **go_godebug_non_default_behavior_x509negativeserial_events_total:**
+The number of non-default behaviors executed by the crypto/x509 package
+due to a non-default GODEBUG=x509negativeserial=... setting. Sourced from
+/godebug/non-default-behavior/x509negativeserial:events
+- **go_godebug_non_default_behavior_x509sha1_events_total:** The
+number of non-default behaviors executed by the crypto/x509 package
+due to a non-default GODEBUG=x509sha1=... setting. Sourced from
+/godebug/non-default-behavior/x509sha1:events
+- **go_godebug_non_default_behavior_x509usefallbackroots_events_total:**
+The number of non-default behaviors executed by the crypto/x509 package
+due to a non-default GODEBUG=x509usefallbackroots=... setting. Sourced from
+/godebug/non-default-behavior/x509usefallbackroots:events
+- **go_godebug_non_default_behavior_x509usepolicies_events_total:**
+The number of non-default behaviors executed by the crypto/x509 package
+due to a non-default GODEBUG=x509usepolicies=... setting. Sourced from
+/godebug/non-default-behavior/x509usepolicies:events
+- **go_godebug_non_default_behavior_zipinsecurepath_events_total:**
+The number of non-default behaviors executed by the archive/zip package
+due to a non-default GODEBUG=zipinsecurepath=... setting. Sourced from
+/godebug/non-default-behavior/zipinsecurepath:events
+- **go_goroutines:** Number of goroutines that currently exist.
+- **go_info:** Information about the Go environment.
+- **go_memory_classes_heap_free_bytes:** Memory that is completely free and
+eligible to be returned to the underlying system, but has not been. This
+metric is the runtime's estimate of free address space that is backed by
+physical memory. Sourced from /memory/classes/heap/free:bytes
+- **go_memory_classes_heap_objects_bytes:** Memory occupied by live
+objects and dead objects that have not yet been marked free by the garbage
+collector. Sourced from /memory/classes/heap/objects:bytes
+- **go_memory_classes_heap_released_bytes:** Memory that is completely free
+and has been returned to the underlying system. This metric is the runtime's
+estimate of free address space that is still mapped into the process, but is
+not backed by physical memory. Sourced from /memory/classes/heap/released:bytes
+- **go_memory_classes_heap_stacks_bytes:** Memory allocated from the heap that
+is reserved for stack space, whether or not it is currently in-use. Currently,
+this represents all stack memory for goroutines. It also includes all OS thread
+stacks in non-cgo programs. Note that stacks may be allocated differently in
+the future, and this may change. Sourced from /memory/classes/heap/stacks:bytes
+- **go_memory_classes_heap_unused_bytes:** Memory that is reserved for
+heap objects but is not currently used to hold heap objects. Sourced from
+/memory/classes/heap/unused:bytes
+- **go_memory_classes_metadata_mcache_free_bytes:** Memory that is
+reserved for runtime mcache structures, but not in-use. Sourced from
+/memory/classes/metadata/mcache/free:bytes
+- **go_memory_classes_metadata_mcache_inuse_bytes:** Memory that is occupied
+by runtime mcache structures that are currently being used. Sourced from
+/memory/classes/metadata/mcache/inuse:bytes
+- **go_memory_classes_metadata_mspan_free_bytes:** Memory that is
+reserved for runtime mspan structures, but not in-use. Sourced from
+/memory/classes/metadata/mspan/free:bytes
+- **go_memory_classes_metadata_mspan_inuse_bytes:** Memory that is occupied
+by runtime mspan structures that are currently being used. Sourced from
+/memory/classes/metadata/mspan/inuse:bytes
+- **go_memory_classes_metadata_other_bytes:** Memory that is
+reserved for or used to hold runtime metadata. Sourced from
+/memory/classes/metadata/other:bytes
+- **go_memory_classes_os_stacks_bytes:** Stack memory allocated by the
+underlying operating system. In non-cgo programs this metric is currently
+zero. This may change in the future. In cgo programs this metric includes
+OS thread stacks allocated directly from the OS. Currently, this only
+accounts for one stack in c-shared and c-archive build modes, and other
+sources of stacks from the OS are not measured. This too may change in the
+future. Sourced from /memory/classes/os-stacks:bytes
+- **go_memory_classes_other_bytes:** Memory used by execution trace buffers,
+structures for debugging the runtime, finalizer and profiler specials,
+and more. Sourced from /memory/classes/other:bytes
+- **go_memory_classes_profiling_buckets_bytes:** Memory that is
+used by the stack trace hash map used for profiling. Sourced from
+/memory/classes/profiling/buckets:bytes
+- **go_memory_classes_total_bytes:** All memory mapped by the Go runtime
+into the current process as read-write. Note that this does not include
+memory mapped by code called via cgo or via the syscall package. Sum of all
+metrics in /memory/classes. Sourced from /memory/classes/total:bytes
+- **go_memstats_alloc_bytes:** Number of bytes allocated in heap and currently
+in use. Equals to /memory/classes/heap/objects:bytes.
+- **go_memstats_alloc_bytes_total:** Total number of bytes allocated in heap
+until now, even if released already. Equals to /gc/heap/allocs:bytes.
+- **go_memstats_buck_hash_sys_bytes:** Number of bytes used by the profiling
+bucket hash table. Equals to /memory/classes/profiling/buckets:bytes.
+- **go_memstats_frees_total:** Total number of heap objects frees. Equals
+to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
+- **go_memstats_gc_sys_bytes:** Number of bytes used for garbage collection
+system metadata. Equals to /memory/classes/metadata/other:bytes.
+- **go_memstats_heap_alloc_bytes:** Number of heap bytes allocated
+and currently in use, same as go_memstats_alloc_bytes. Equals to
+/memory/classes/heap/objects:bytes.
+- **go_memstats_heap_idle_bytes:** Number of heap bytes waiting
+to be used. Equals to /memory/classes/heap/released:bytes +
+/memory/classes/heap/free:bytes.
+- **go_memstats_heap_inuse_bytes:** Number of heap bytes that
+are in use. Equals to /memory/classes/heap/objects:bytes +
+/memory/classes/heap/unused:bytes
+- **go_memstats_heap_objects:** Number of currently allocated objects. Equals
+to /gc/heap/objects:objects.
+- **go_memstats_heap_released_bytes:** Number of heap bytes released to
+OS. Equals to /memory/classes/heap/released:bytes.
+- **go_memstats_heap_sys_bytes:** Number of heap bytes obtained
+from system. Equals to /memory/classes/heap/objects:bytes +
+/memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes +
+/memory/classes/heap/free:bytes.
+- **go_memstats_last_gc_time_seconds:** Number of seconds since 1970 of last
+garbage collection.
+- **go_memstats_mallocs_total:** Total number of heap objects allocated, both
+live and gc-ed. Semantically a counter version for go_memstats_heap_objects
+gauge. Equals to /gc/heap/allocs:objects + /gc/heap/tiny/allocs:objects.
+- **go_memstats_mcache_inuse_bytes:** Number of bytes in use by mcache
+structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
+- **go_memstats_mcache_sys_bytes:** Number of bytes used for mcache structures
+obtained from system. Equals to /memory/classes/metadata/mcache/inuse:bytes +
+/memory/classes/metadata/mcache/free:bytes.
+- **go_memstats_mspan_inuse_bytes:** Number of bytes in use by mspan
+structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
+- **go_memstats_mspan_sys_bytes:** Number of bytes used for mspan structures
+obtained from system. Equals to /memory/classes/metadata/mspan/inuse:bytes +
+/memory/classes/metadata/mspan/free:bytes.
+- **go_memstats_next_gc_bytes:** Number of heap bytes when next garbage
+collection will take place. Equals to /gc/heap/goal:bytes.
+- **go_memstats_other_sys_bytes:** Number of bytes used for other system
+allocations. Equals to /memory/classes/other:bytes.
+- **go_memstats_stack_inuse_bytes:** Number of bytes obtained
+from system for stack allocator in non-CGO environments. Equals to
+/memory/classes/heap/stacks:bytes.
+- **go_memstats_stack_sys_bytes:** Number of bytes obtained from system
+for stack allocator. Equals to /memory/classes/heap/stacks:bytes +
+/memory/classes/os-stacks:bytes.
+- **go_memstats_sys_bytes:** Number of bytes obtained from system. Equals
+to /memory/classes/total:bytes.
+- **go_sched_gomaxprocs_threads:** The current runtime.GOMAXPROCS setting,
+or the number of operating system threads that can execute user-level Go
+code simultaneously. Sourced from /sched/gomaxprocs:threads
+- **go_sched_goroutines_goroutines:** Count of live goroutines. Sourced
+from /sched/goroutines:goroutines
+- **go_sched_latencies_seconds:** Distribution of the time goroutines have
+spent in the scheduler in a runnable state before actually running. Bucket
+counts increase monotonically. Sourced from /sched/latencies:seconds
+- **go_sched_pauses_stopping_gc_seconds:** Distribution of individual
+GC-related stop-the-world stopping latencies. This is the time it takes from
+deciding to stop the world until all Ps are stopped. This is a subset of the
+total GC-related stop-the-world time (/sched/pauses/total/gc:seconds). During
+this time, some threads may be executing. Bucket counts increase
+monotonically. Sourced from /sched/pauses/stopping/gc:seconds
+- **go_sched_pauses_stopping_other_seconds:** Distribution of
+individual non-GC-related stop-the-world stopping latencies. This
+is the time it takes from deciding to stop the world until all Ps are
+stopped. This is a subset of the total non-GC-related stop-the-world time
+(/sched/pauses/total/other:seconds). During this time, some threads
+may be executing. Bucket counts increase monotonically. Sourced from
+/sched/pauses/stopping/other:seconds
+- **go_sched_pauses_total_gc_seconds:** Distribution of individual
+GC-related stop-the-world pause latencies. This is the time from deciding
+to stop the world until the world is started again. Some of this time
+is spent getting all threads to stop (this is measured directly in
+/sched/pauses/stopping/gc:seconds), during which some threads may
+still be running. Bucket counts increase monotonically. Sourced from
+/sched/pauses/total/gc:seconds
+- **go_sched_pauses_total_other_seconds:** Distribution of individual
+non-GC-related stop-the-world pause latencies. This is the time from
+deciding to stop the world until the world is started again. Some of
+this time is spent getting all threads to stop (measured directly
+in /sched/pauses/stopping/other:seconds). Bucket counts increase
+monotonically. Sourced from /sched/pauses/total/other:seconds
+- **go_sync_mutex_wait_total_seconds_total:** Approximate cumulative
+time goroutines have spent blocked on a sync.Mutex, sync.RWMutex, or
+runtime-internal lock. This metric is useful for identifying global
+changes in lock contention. Collect a mutex or block profile using the
+runtime/pprof package for more detailed contention data. Sourced from
+/sync/mutex/wait/total:seconds
+- **go_threads:** Number of OS threads created.
+
+#### Hidden Metrics
+
+- **hidden_metrics_total:** The count of hidden metrics.
+
+#### Process Metrics
+
+- **process_cpu_seconds_total:** Total user and system CPU time spent
+in seconds.
+- **process_max_fds:** Maximum number of open file descriptors.
+- **process_network_receive_bytes_total:** Number of bytes received by the
+process over the network.
+- **process_network_transmit_bytes_total:** Number of bytes sent by the
+process over the network.
+- **process_open_fds:** Number of open file descriptors.
+- **process_resident_memory_bytes:** Resident memory size in bytes.
+- **process_start_time_seconds:** Start time of the process since unix epoch
+in seconds.
+- **process_virtual_memory_bytes:** Virtual memory size in bytes.
+- **process_virtual_memory_max_bytes:** Maximum amount of virtual memory
+available in bytes.
+
+#### Registered Metrics
+
+- **registered_metrics_total:** The count of registered metrics broken down by
+stability level and deprecation version.
+
+#### Workqueue Metrics
+
+- **workqueue_adds_total:** Total number of adds handled by workqueue
+- **workqueue_depth:** Current depth of workqueue
+- **workqueue_longest_running_processor_seconds:** How many seconds has the
+longest running processor for workqueue been running.
+- **workqueue_queue_duration_seconds:** How long in seconds an item stays
+in workqueue before being requested.
+- **workqueue_retries_total:** Total number of retries handled by workqueue
+- **workqueue_unfinished_work_seconds:** How many seconds of work has been
+done that is in progress and hasn't been observed by work_duration. Large
+values indicate stuck threads. One can deduce the number of stuck threads
+by observing the rate at which this increases.
+- **workqueue_work_duration_seconds:** How long in seconds processing an
+item from workqueue takes.
diff --git a/content/docs/v2.2.0-alpha.2/docs/secondary-network.md b/content/docs/v2.2.0-alpha.2/docs/secondary-network.md
new file mode 100644
index 00000000..4819d67d
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/secondary-network.md
@@ -0,0 +1,172 @@
+# Antrea Secondary Network Support
+
+Antrea can work with Multus, in which case Antrea is the primary CNI of the
+Kubernetes cluster and provisions the "primary" network interfaces of Pods;
+while Multus manages secondary networks and executes other CNIs to create
+secondary network interfaces of Pods. The [Antrea + Multus guide](cookbooks/multus)
+talks about how to use Antrea with Multus.
+
+Starting with Antrea v1.15, Antrea can also provision secondary network
+interfaces and connect them to VLAN networks. This document describes Antrea's
+native support for VLAN secondary networks.
+
+## Prerequisites
+
+Native secondary network support is still an alpha feature and is disabled by
+default. To use the feature, the `SecondaryNetwork` feature gate must be enabled
+in the `antrea-agent` configuration. If you need IPAM for the secondary
+interfaces, you should also enable the `AntreaIPAM` feature gate in both
+`antrea-agent` and `antrea-controller` configuration. At the moment, Antrea IPAM
+is the only available IPAM option for secondary networks managed by Antrea. The
+`antrea-config` ConfigMap with the two feature gates enabled looks like the
+following:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ AntreaIPAM: true
+ antrea-agent.conf: |
+ featureGates:
+ AntreaIPAM: true
+ SecondaryNetwork: true
+```
+
+Antrea leverages the `NetworkAttachmentDefinition` CRD from [Kubernetes Network
+Plumbing Working Group](https://github.com/k8snetworkplumbingwg/multi-net-spec)
+to define secondary networks. You can import the CRD to your cluster using the
+following command:
+
+```bash
+kubectl apply -f https://github.com/k8snetworkplumbingwg/network-attachment-definition-client/raw/master/artifacts/networks-crd.yaml
+```
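+
+You can then confirm that the CRD has been created (the CRD name below is the
+standard one defined by the Network Plumbing Working Group):
+
+```bash
+kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
+```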
+
+## Secondary OVS bridge configuration
+
+A VLAN secondary interface will be connected to a separate OVS bridge on the
+Node. You can specify the secondary OVS bridge configuration in the
+`antrea-agent` configuration, and `antrea-agent` will automatically create the
+OVS bridge based on the configuration. For example, the following configuration
+will create an OVS bridge named `br-secondary`, with a physical interface
+`eth1`.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ secondaryNetwork:
+ ovsBridges: [{"bridgeName": "br-secondary", "physicalInterfaces": ["eth1"]}]
+```
+
+At the moment, Antrea supports only a single OVS bridge for secondary networks,
+and supports up to eight physical interfaces on the bridge. The physical
+interfaces must not include the Node's management interface; otherwise, the
+Node's management network connectivity can be broken after `antrea-agent`
+creates the OVS bridge and moves the management interface to the bridge.
+
+## Secondary VLAN network configuration
+
+A secondary VLAN network is defined by a NetworkAttachmentDefinition CR. For
+example, the following NetworkAttachmentDefinition defines a VLAN network
+`vlan100`.
+
+```yaml
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+ name: vlan100
+spec:
+ config: '{
+ "cniVersion": "0.3.0",
+ "type": "antrea",
+ "networkType": "vlan",
+ "mtu": 1500,
+ "vlan": 100,
+ "ipam": {
+ "type": "antrea",
+ "ippools": ["vlan100-ipv4", "vlan100-ipv6"]
+ }
+ }'
+```
+
+`antrea-agent` will connect Pod secondary interfaces belonging to a VLAN
+network to the secondary OVS bridge on the Node. If a non-zero VLAN is
+specified in the network's `config`, `antrea-agent` will configure the VLAN ID
+on the OVS port, so the interface's traffic will be isolated within the
+VLAN. Before the traffic is forwarded out of the Node via the secondary
+bridge's physical interface, OVS will insert the VLAN tag into the packets.
+
+A few extra notes about the NetworkAttachmentDefinition `config` fields:
+
+* `type` - must be set to `antrea`.
+* `networkType` - the only supported network type is `vlan` as of now.
+* `mtu` - defaults to 1500 if not set.
+* `vlan` - can be set to 0 or a valid VLAN ID (1 - 4094). Defaults to 0. The
+VLAN ID can also be specified as part of the spec of an IPPool referenced in the
+`ipam` section, but `vlan` in NetworkAttachmentDefinition `config` will override
+the VLAN in IPPool(s) if both are set.
+* `ipam` - it is optional. If not set, the secondary interfaces created for the
+network won't have an IP address allocated. For more information about
+secondary network IPAM configuration, please refer to the
+[Antrea IPAM document](antrea-ipam.md#ipam-for-secondary-network); a minimal
+IPPool sketch is also shown after this list.
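+
+For illustration only, a minimal IPPool backing the `vlan100-ipv4` pool
+referenced above might look like the sketch below; the IP range, gateway and
+prefix length are placeholders, and the
+[Antrea IPAM document](antrea-ipam.md#ipam-for-secondary-network) should be
+treated as the authoritative reference for the schema:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: IPPool
+metadata:
+  name: vlan100-ipv4
+spec:
+  ipRanges:
+  - start: 10.10.100.2    # example addresses, adjust to your VLAN subnet
+    end: 10.10.100.100
+  subnetInfo:
+    gateway: 10.10.100.1
+    prefixLength: 24
+```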
+
+## Pod secondary interface configuration
+
+You can create a Pod with secondary network interfaces by adding the
+`k8s.v1.cni.cncf.io/networks` annotation to the Pod. The following example Pod
+includes two secondary interfaces, one in network `vlan100` which should be
+created in the same Namespace as the Pod, the other in network `vlan200` which
+is created in Namespace `networks`.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: sample-pod
+ labels:
+ app: antrea-secondary-network-demo
+ annotations:
+ k8s.v1.cni.cncf.io/networks: '[
+      {"name": "vlan100"},
+      {"name": "vlan200", "namespace": "networks", "interface": "eth200"}
+ ]'
+spec:
+ containers:
+ - name: toolbox
+ image: antrea/toolbox:latest
+```
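+
+Once the Pod is running, you can check that the secondary interfaces were
+created by listing the Pod's network interfaces (the `antrea/toolbox` image
+includes the `ip` tool; other images may not):
+
+```bash
+kubectl exec sample-pod -c toolbox -- ip -brief address
+```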
+
+If the Pod has only a single secondary network interface, you can also set
+the `k8s.v1.cni.cncf.io/networks` annotation to `<network-name>`,
+or `<namespace>/<network-name>` if the NetworkAttachmentDefinition CR is
+created in a different Namespace from the Pod's Namespace, or
+`<network-name>@<interface-name>` if you want to specify the Pod interface
+name. For example:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: sample-pod
+ labels:
+ app: antrea-secondary-network-demo
+ annotations:
+ k8s.v1.cni.cncf.io/networks: networks/vlan200@eth200
+spec:
+ containers:
+ - name: toolbox
+ image: antrea/toolbox:latest
+```
+
+**At the moment, we do NOT support annotation update / removal: when the
+ annotation is added to the Pod for the first time (e.g., when creating the
+ Pod), we will configure the secondary network interfaces accordingly, and no
+ change is possible after that, until the Pod is deleted.**
diff --git a/content/docs/v2.2.0-alpha.2/docs/securing-control-plane.md b/content/docs/v2.2.0-alpha.2/docs/securing-control-plane.md
new file mode 100644
index 00000000..0269a1f8
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/securing-control-plane.md
@@ -0,0 +1,169 @@
+# Securing Control Plane
+
+All API communication between Antrea control plane components is encrypted with
+TLS. The TLS certificates that Antrea requires can be automatically generated.
+You can also provide your own certificates. This page explains the certificates
+that Antrea requires and how to configure and rotate them for Antrea.
+
+## Table of Contents
+
+
+- [What certificates are required by Antrea](#what-certificates-are-required-by-antrea)
+- [How certificates are used by Antrea](#how-certificates-are-used-by-antrea)
+- [Providing your own certificates](#providing-your-own-certificates)
+ - [Using kubectl](#using-kubectl)
+ - [Using cert-manager](#using-cert-manager)
+- [Certificate rotation](#certificate-rotation)
+
+## What certificates are required by Antrea
+
+Currently Antrea only requires a single server certificate for the
+antrea-controller API server endpoint, which is for the following communication:
+
+- Each antrea-agent talks to the antrea-controller to fetch the computed
+  NetworkPolicies
+- The kube-aggregator (i.e. kube-apiserver) talks to the antrea-controller for
+ proxying antctl's requests (when run in "controller" mode)
+
+Antrea doesn't require client certificates for its own components as it
+delegates authentication and authorization to the Kubernetes API, using
+Kubernetes [ServiceAccount tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens)
+for client authentication.
+
+## How certificates are used by Antrea
+
+By default, antrea-controller generates a self-signed certificate. You can
+override the behavior by [providing your own certificates](#providing-your-own-certificates).
+Either way, the antrea-controller will distribute the CA certificate as a
+ConfigMap named `antrea-ca` in the Antrea deployment Namespace and inject it
+into the APIServices resources created by Antrea in order to allow its clients
+(i.e. antrea-agent, kube-apiserver) to perform authentication.
+
+Typically, clients that wish to access the antrea-controller API can
+authenticate the server by validating against the CA certificate published in
+the `antrea-ca` ConfigMap.
+
+## Providing your own certificates
+
+Since Antrea v0.7.0, you can provide your own certificates to Antrea. To do so,
+you must set the `selfSignedCert` field of `antrea-controller.conf` to `false`,
+so that the antrea-controller will read the certificate key pair from the
+`antrea-controller-tls` Secret. The example manifests and descriptions below
+assume Antrea is deployed in the `kube-system` Namespace. If you deploy Antrea
+in a different Namespace, please update the Namespace name in the manifests
+accordingly.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ labels:
+ app: antrea
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ selfSignedCert: false
+```
+
+You can generate the required certificate manually, or through
+[cert-manager](https://cert-manager.io/docs/). Either way, the certificate must
+be issued with the following key usages and DNS names:
+
+X509 key usages:
+
+- digital signature
+- key encipherment
+- server auth
+
+DNS names:
+
+- antrea.kube-system.svc
+- antrea.kube-system.svc.cluster.local
+
+**Note: this assumes you are using `cluster.local` as the cluster domain;
+replace it with the actual domain of your Kubernetes cluster.**
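+
+For the manual option, the sketch below shows one way to generate such a
+certificate with `openssl`, assuming you already have a CA key pair (`ca.crt`
+and `ca.key`); the file names and validity period are illustrative:
+
+```bash
+# Generate a private key and a CSR for the antrea-controller.
+openssl req -new -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr -subj "/CN=antrea"
+# Sign the CSR with the CA, adding the required key usages and DNS names.
+openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out tls.crt \
+  -extfile <(printf '%s\n' \
+    "keyUsage = digitalSignature, keyEncipherment" \
+    "extendedKeyUsage = serverAuth" \
+    "subjectAltName = DNS:antrea.kube-system.svc, DNS:antrea.kube-system.svc.cluster.local")
+```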
+
+You can then create the `antrea-controller-tls` Secret with the certificate key
+pair and the CA certificate in the following form:
+
+```yaml
+apiVersion: v1
+kind: Secret
+# The type can also be Opaque.
+type: kubernetes.io/tls
+metadata:
+ name: antrea-controller-tls
+ namespace: kube-system
+data:
+  ca.crt: <BASE64-ENCODED CA CERTIFICATE>
+  tls.crt: <BASE64-ENCODED TLS CERTIFICATE>
+  tls.key: <BASE64-ENCODED TLS KEY>
+```
+
+### Using kubectl
+
+You can use `kubectl apply -f <secret manifest>` to create the above Secret,
+or use `kubectl create secret`:
+
+```bash
+kubectl create secret generic antrea-controller-tls -n kube-system \
+  --from-file=ca.crt=<PATH TO CA CERTIFICATE> \
+  --from-file=tls.crt=<PATH TO TLS CERTIFICATE> \
+  --from-file=tls.key=<PATH TO TLS KEY>
+```
+
+### Using cert-manager
+
+If you set up [cert-manager](https://cert-manager.io/docs/) to manage your
+certificates, it can be used to issue and renew the certificate required by
+Antrea.
+
+To get started, follow the [cert-manager installation documentation](
+https://cert-manager.io/docs/installation/kubernetes/) to deploy cert-manager
+and configure `Issuer` or `ClusterIssuer` resources.
+
+The `Certificate` should be created in the `kube-system` Namespace. For
+example, a `Certificate` may look like:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+ name: antrea-controller-tls
+ namespace: kube-system
+spec:
+ secretName: antrea-controller-tls
+ commonName: antrea
+ dnsNames:
+ - antrea.kube-system.svc
+ - antrea.kube-system.svc.cluster.local
+ usages:
+ - digital signature
+ - key encipherment
+ - server auth
+ issuerRef:
+ # Replace the name with the real Issuer you configured.
+ name: ca-issuer
+ # We can reference ClusterIssuers by changing the kind here.
+ # The default value is Issuer (i.e. a locally namespaced Issuer)
+ kind: Issuer
+```
+
+Once the `Certificate` is created, you should see the `antrea-controller-tls`
+Secret created in the `kube-system` Namespace.
+
+**Note it may take up to 1 minute for Kubernetes to propagate the Secret update
+to the antrea-controller Pod if the Pod starts before the Secret is created.**
+
+## Certificate rotation
+
+Antrea v0.7.0 and higher supports certificate rotation. It can be achieved by
+simply updating the `antrea-controller-tls` Secret. The
+antrea-controller will react to the change, updating its serving certificate and
+re-distributing the latest CA certificate (if applicable).
+
+If you are using cert-manager to issue the certificate, it will renew the
+certificate before expiry and update the Secret automatically.
+
+If you are using certificates signed by Antrea, Antrea will rotate the
+certificate automatically before expiration.
diff --git a/content/docs/v2.2.0-alpha.2/docs/security.md b/content/docs/v2.2.0-alpha.2/docs/security.md
new file mode 100644
index 00000000..81e2fe68
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/security.md
@@ -0,0 +1,185 @@
+# Security Recommendations
+
+This document describes some security recommendations when deploying Antrea in a
+cluster, and in particular a [multi-tenancy
+cluster](https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview#what_is_multi-tenancy).
+
+To report a vulnerability in Antrea, please refer to
+[SECURITY.md](../SECURITY.md).
+
+For information about securing Antrea control-plane communications, refer to
+this [document](securing-control-plane.md).
+
+## Protecting Your Cluster Against Privilege Escalations
+
+### Antrea Agent
+
+Like all other K8s Network Plugins, Antrea runs an agent (the Antrea Agent) on
+every Node in the cluster, using a K8s DaemonSet. And just like for other K8s
+Network Plugins, this agent requires a specific set of permissions which grant
+it access to the K8s API using
+[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/). These
+permissions are required to implement the different features offered by
+Antrea. If any Node in the cluster happens to become compromised (e.g., by an
+escaped container) and the token for the `antrea-agent` ServiceAccount is
+harvested by the attacker, some of these permissions can be leveraged to
+negatively affect other workloads running on the cluster. In particular, the
+Antrea Agent is granted the following permissions:
+
+* `patch` the `pods/status` resources: a successful attacker could abuse this
+ permission to re-label Pods to facilitate [confused deputy
+ attacks](https://en.wikipedia.org/wiki/Confused_deputy_problem) against
+ built-in controllers. For example, making a Pod match a Service selector in
+ order to man-in-the-middle (MITM) the Service traffic, or making a Pod match a
+ ReplicaSet selector so that the ReplicaSet controller deletes legitimate
+ replicas.
+* `patch` the `nodes/status` resources: a successful attacker could abuse this
+ permission to affect scheduling by modifying Node fields like labels,
+ capacity, and conditions.
+
+In both cases, the Antrea Agent only requires the ability to mutate the
+annotations field for all Pods and Nodes, but with K8s RBAC, the lowest
+permission level that we can grant the Antrea Agent to satisfy this requirement
+is the `patch` verb for the `status` subresource for Pods and Nodes (which also
+provides the ability to mutate labels).
+
+To mitigate the risk presented by these permissions in case of a compromised
+token, we suggest that you use
+[Gatekeeper](https://github.com/open-policy-agent/gatekeeper), with the
+appropriate policy. We provide the following Gatekeeper policy, consisting of a
+`ConstraintTemplate` and the corresponding `Constraint`. When using this policy,
+it will no longer be possible for the `antrea-agent` ServiceAccount to mutate
+anything besides annotations for the Pods and Nodes resources.
+
+```yaml
+# ConstraintTemplate
+apiVersion: templates.gatekeeper.sh/v1
+kind: ConstraintTemplate
+metadata:
+ name: antreaagentstatusupdates
+ annotations:
+ description: >-
+ Disallows unauthorized updates to status subresource by Antrea Agent
+ Only annotations can be mutated
+spec:
+ crd:
+ spec:
+ names:
+ kind: AntreaAgentStatusUpdates
+ targets:
+ - target: admission.k8s.gatekeeper.sh
+ rego: |
+ package antreaagentstatusupdates
+ username := object.get(input.review.userInfo, "username", "")
+ targetUsername := "system:serviceaccount:kube-system:antrea-agent"
+
+ allowed_mutation(object, oldObject) {
+ object.status == oldObject.status
+ object.metadata.labels == oldObject.metadata.labels
+ }
+
+ violation[{"msg": msg}] {
+ username == targetUsername
+ input.review.operation == "UPDATE"
+ input.review.requestSubResource == "status"
+ not allowed_mutation(input.review.object, input.review.oldObject)
+ msg := "Antrea Agent is not allowed to mutate this field"
+ }
+```
+
+```yaml
+# Constraint
+apiVersion: constraints.gatekeeper.sh/v1beta1
+kind: AntreaAgentStatusUpdates
+metadata:
+ name: antrea-agent-status-updates
+spec:
+ match:
+ kinds:
+ - apiGroups: [""]
+ kinds: ["Pod", "Node"]
+```
+
+***Please ensure that the `ValidatingWebhookConfiguration` for your Gatekeeper
+ installation enables policies to be applied on the `pods/status` and
+ `nodes/status` subresources, which may not be the case by default.***
+
+As a reference, the following `ValidatingWebhookConfiguration` rule will cause
+policies to be applied to all resources and their subresources:
+
+```yaml
+ - apiGroups:
+ - '*'
+ apiVersions:
+ - '*'
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - '*/*'
+ scope: '*'
+```
+
+while the following rule will cause policies to be applied to all resources, but
+not their subresources:
+
+```yaml
+ - apiGroups:
+ - '*'
+ apiVersions:
+ - '*'
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - '*'
+ scope: '*'
+```
+
+### Antrea Controller
+
+The Antrea Controller, which runs as a single-replica Deployment, is granted
+higher-level permissions than the Antrea Agent. For production clusters running
+Antrea, we recommend scheduling the `antrea-controller` Pod on a "secure" Node, which
+could for example be the Node (or one of the Nodes) running the K8s
+control-plane.
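+
+For example, a minimal sketch (assuming the standard control-plane role label;
+control-plane Nodes are often tainted, so a matching toleration may also be
+required):
+
+```bash
+kubectl patch deployment antrea-controller -n kube-system --type merge -p '
+spec:
+  template:
+    spec:
+      nodeSelector:
+        node-role.kubernetes.io/control-plane: ""
+'
+```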
+
+## Protecting Access to Antrea Configuration Files
+
+Antrea relies on persisting files on each K8s Node's filesystem, in order to
+minimize disruptions to network functions across Antrea Agent restarts, in
+particular during an upgrade. All these files are located under
+`/var/run/antrea/`. The most notable of these files is
+`/var/run/antrea/openvswitch/conf.db`, which stores the Open vSwitch
+database. Prior to Antrea v0.10, any user had read access to the file on the
+host (permissions were set to `0644`). Starting with v0.10, this is no longer
+the case (permissions are now set to `0640`). Starting with v0.13, we further
+remove access to the `/var/run/antrea/` directory for non-root users
+(permissions are set to `0750`).
+
+If a malicious Pod can gain read access to this file, or, prior to Antrea v0.10,
+if an attacker can gain access to the host, they can potentially access
+sensitive information stored in the database, most notably the Pre-Shared Key
+(PSK) used to configure [IPsec tunnels](traffic-encryption.md), which is stored
+in plaintext in the database. If a PSK is leaked, an attacker can mount a
+man-in-the-middle attack and intercept tunnel traffic.
+
+If a malicious Pod can gain write access to this file, it can modify the
+contents of the database, and therefore impact network functions.
+
+Administrators of multi-tenancy clusters running Antrea should take steps to
+restrict the access of Pods to `/var/run/antrea/`. One way to achieve this is to
+use a
+[PodSecurityPolicy](https://kubernetes.io/docs/concepts/policy/pod-security-policy)
+and restrict the set of allowed
+[volumes](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems)
+to exclude `hostPath`. **This guidance applies to all multi-tenancy clusters and
+is not specific to Antrea.** To quote the K8s documentation:
+
+> There are many ways a container with unrestricted access to the host
+ filesystem can escalate privileges, including reading data from other
+ containers, and abusing the credentials of system services, such as Kubelet.
+
+An alternative solution to K8s PodSecurityPolicies is to use
+[Gatekeeper](https://github.com/open-policy-agent/gatekeeper) to constrain usage
+of the host filesystem by Pods.
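+
+As an illustration, assuming the `K8sPSPVolumeTypes` ConstraintTemplate from
+the Gatekeeper community library has been installed, a Constraint like the
+following sketch could restrict Pods to non-hostPath volume types:
+
+```yaml
+apiVersion: constraints.gatekeeper.sh/v1beta1
+kind: K8sPSPVolumeTypes
+metadata:
+  name: disallow-host-filesystem
+spec:
+  match:
+    kinds:
+      - apiGroups: [""]
+        kinds: ["Pod"]
+  parameters:
+    # Allow common volume types, but not hostPath
+    volumes:
+      - configMap
+      - emptyDir
+      - projected
+      - secret
+      - downwardAPI
+      - persistentVolumeClaim
+```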
diff --git a/content/docs/v2.2.0-alpha.2/docs/service-loadbalancer.md b/content/docs/v2.2.0-alpha.2/docs/service-loadbalancer.md
new file mode 100644
index 00000000..3210ab51
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/service-loadbalancer.md
@@ -0,0 +1,428 @@
+# Service of type LoadBalancer
+
+## Table of Contents
+
+
+- [Service external IP management by Antrea](#service-external-ip-management-by-antrea)
+ - [Preparation](#preparation)
+ - [Configuration](#configuration)
+ - [Enable Service external IP management feature](#enable-service-external-ip-management-feature)
+ - [Create an ExternalIPPool custom resource](#create-an-externalippool-custom-resource)
+ - [Create a Service of type LoadBalancer](#create-a-service-of-type-loadbalancer)
+ - [Validate Service external IP](#validate-service-external-ip)
+ - [Limitations](#limitations)
+- [Using MetalLB with Antrea](#using-metallb-with-antrea)
+ - [Install MetalLB](#install-metallb)
+ - [Configure MetalLB with layer 2 mode](#configure-metallb-with-layer-2-mode)
+ - [Configure MetalLB with BGP mode](#configure-metallb-with-bgp-mode)
+- [Interoperability with kube-proxy IPVS mode](#interoperability-with-kube-proxy-ipvs-mode)
+ - [Issue with Antrea Egress](#issue-with-antrea-egress)
+
+
+In Kubernetes, implementing Services of type LoadBalancer usually requires
+an external load balancer. On cloud platforms (including public clouds
+and platforms like NSX-T) that support load balancers, Services of type
+LoadBalancer can be implemented by the Kubernetes Cloud Provider, which
+configures the cloud load balancers for the Services. However, the load
+balancer support is not available on all platforms, and in some cases deploying
+external load balancers is complex or incurs extra cost. This document
+describes two options for supporting Services of type LoadBalancer with Antrea,
+without an external load balancer:
+
+1. Using Antrea's built-in external IP management for Services of type
+LoadBalancer
+2. Leveraging [MetalLB](https://metallb.universe.tf)
+
+## Service external IP management by Antrea
+
+Antrea supports external IP management for Services of type LoadBalancer
+since version 1.5, which can work together with Antrea Proxy or
+`kube-proxy` to implement Services of type LoadBalancer, without requiring an
+external load balancer. With the external IP management feature, Antrea can
+allocate an external IP for a Service of type LoadBalancer from an
+[ExternalIPPool](egress.md#the-externalippool-resource), and select a Node
+based on the ExternalIPPool's NodeSelector to host the external IP. Antrea
+configures the Service's external IP on the selected Node, so that Service
+requests to the external IP reach the Node, where they are handled by
+Antrea Proxy or `kube-proxy` and distributed to the Service's
+Endpoints. Antrea also implements a Node failover mechanism for Service
+external IPs. When Antrea detects a Node hosting an external IP is down, it
+will move the external IP to another available Node of the ExternalIPPool.
+
+### Preparation
+
+If you are using `kube-proxy` in IPVS mode, you need to make sure `strictARP` is
+enabled in the `kube-proxy` configuration. For more information about how to
+configure `kube-proxy`, please refer to the [Interoperability with kube-proxy
+IPVS mode](#interoperability-with-kube-proxy-ipvs-mode) section.
+
+If you are using `kube-proxy` iptables mode or [Antrea Proxy with `proxyAll`](antrea-proxy.md#antrea-proxy-with-proxyall),
+no extra configuration change is needed.
+
+### Configuration
+
+#### Enable Service external IP management feature
+
+At this moment, external IP management for Services is an alpha feature of
+Antrea. The `ServiceExternalIP` feature gate of `antrea-agent` and
+`antrea-controller` must be enabled for the feature to work. You can enable
+the `ServiceExternalIP` feature gate in the `antrea-config` ConfigMap in
+the Antrea deployment YAML:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ ServiceExternalIP: true
+ antrea-controller.conf: |
+ featureGates:
+ ServiceExternalIP: true
+```
+
+The feature works with both Antrea Proxy and `kube-proxy`, including the
+following configurations:
+
+- Antrea Proxy without `proxyAll` enabled - this is `antrea-agent`'s default
+configuration, in which `kube-proxy` serves the request traffic for Services
+of type LoadBalancer (while Antrea Proxy handles Service requests from Pods).
+- Antrea Proxy with `proxyAll` enabled - in this case, Antrea Proxy handles
+all Service traffic, including Services of type LoadBalancer.
+- Antrea Proxy disabled - `kube-proxy` handles all Service traffic, including
+Services of type LoadBalancer.
+
+#### Create an ExternalIPPool custom resource
+
+Service external IPs are allocated from an ExternalIPPool, which defines a pool
+of external IPs and the set of Nodes to which the external IPs can be assigned.
+To learn more about ExternalIPPool, please refer to [the Egress
+documentation](egress.md#the-externalippool-resource). The example below
+defines an ExternalIPPool with IP range "10.10.0.2 - 10.10.0.10", and it
+selects the Nodes with label "network-role: ingress-node" to host the external
+IPs:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: service-external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.2
+ end: 10.10.0.10
+ nodeSelector:
+ matchLabels:
+ network-role: ingress-node
+```
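+
+For the pool to be usable, the selected Nodes must carry the matching label,
+which can be added with `kubectl` (using a hypothetical Node name):
+
+```bash
+kubectl label node worker1 network-role=ingress-node
+```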
+
+#### Create a Service of type LoadBalancer
+
+For Antrea to manage the externalIP for a Service of type LoadBalancer, the
+Service should be annotated with `service.antrea.io/external-ip-pool`. For
+example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ type: LoadBalancer
+```
+
+You can also request a particular IP from an ExternalIPPool by setting the
+`loadBalancerIP` field in the Service spec to a specific IP available in the
+ExternalIPPool. Antrea will then allocate that IP from the ExternalIPPool for
+the Service. For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+ selector:
+ app: MyApp
+ loadBalancerIP: "10.10.0.2"
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ type: LoadBalancer
+```
+
+By default, Antrea doesn't allocate a single IP to multiple Services. Before
+Antrea v2.1, if multiple Services requested the same IP, only one of them would
+get the IP assigned. Starting with Antrea v2.1, to share an IP between multiple
+Services, you can annotate the Services with
+`service.antrea.io/allow-shared-load-balancer-ip: true` when requesting a
+particular IP. Note that the IP will only be shared between Services having the
+annotation. If not all Services are annotated, the IP may either be allocated
+to one of the unannotated Services or shared between the annotated Services,
+depending on the order in which they are processed. The annotation only takes
+effect during the IP allocation phase. Once the IP has been allocated, removing
+this annotation from a Service will not result in the IP being reclaimed from
+it or other Services.
+
+For example, the following two Services will share an IP:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service-1
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+ service.antrea.io/allow-shared-load-balancer-ip: "true"
+spec:
+ selector:
+ app: MyApp1
+ loadBalancerIP: "10.10.0.2"
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ type: LoadBalancer
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service-2
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+ service.antrea.io/allow-shared-load-balancer-ip: "true"
+spec:
+ selector:
+ app: MyApp2
+ loadBalancerIP: "10.10.0.2"
+ ports:
+ - protocol: TCP
+ port: 8080
+ targetPort: 8080
+ type: LoadBalancer
+```
+
+Note that sharing a LoadBalancer IP between multiple Services only works under
+the following conditions:
+
+* The Services use different ports.
+* The Services use the `Cluster` external traffic policy. Sharing a
+ LoadBalancer IP between Services using the `Local` external traffic policy
+ can also work if they have identical Endpoints. However, in such cases, it
+ may be preferable to consolidate the Services into a single Service.
+
+Otherwise, the datapath may not work even though the IP is allocated to the
+Services successfully.
+
+#### Validate Service external IP
+
+Once Antrea allocates an external IP for a Service of type LoadBalancer, it
+will set the IP to the `loadBalancer.ingress` field in the Service resource
+`status`. For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ clusterIP: 10.96.0.11
+ type: LoadBalancer
+status:
+ loadBalancer:
+ ingress:
+ - ip: 10.10.0.2
+ hostname: node-1
+```
+
+You can validate that the Service can be accessed from a client using the
+external IP and port (`10.10.0.2:80/TCP` in the above example).
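+
+For example, from a client machine that can reach the external IP:
+
+```bash
+curl http://10.10.0.2:80/
+```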
+
+### Limitations
+
+As described above, Service externalIP management by Antrea configures a
+Service's external IP on a Node, so that the Node can receive Service requests.
+However, this requires that the externalIP on the Node be reachable through the
+Node network. The simplest way to achieve this, when the Nodes are connected to
+a layer 2 subnet, is to reserve a range of IPs from the Node network subnet and
+define Service ExternalIPPools with the reserved IPs. Alternatively, you can
+manually configure Node network routing (e.g. by adding a static route entry to
+the underlay router) to route the Service traffic to the Node that hosts the
+Service's externalIP.
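+
+As a sketch of the static route approach, on a Linux-based underlay gateway the
+entry could look like the following (192.168.77.100 is a hypothetical ingress
+Node IP):
+
+```bash
+ip route add 10.10.0.0/28 via 192.168.77.100
+```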
+
+As of now, Antrea supports Service externalIP management only on Linux Nodes.
+Windows Nodes are not supported yet.
+
+## Using MetalLB with Antrea
+
+MetalLB also implements external IP management for Services of type
+LoadBalancer, and it can be deployed to a Kubernetes cluster with Antrea.
+MetalLB supports two modes - layer 2 mode and BGP mode - to advertise a
+Service external IP to the Node network. The layer 2 mode is similar to what
+Antrea external IP management implements and has the same limitation that the
+external IPs must be allocated from the Node network subnet. The BGP mode
+leverages BGP to advertise external IPs to the Node network router. It does
+not have the layer 2 subnet limitation, but requires the Node network to
+support BGP.
+
+MetalLB will automatically allocate external IPs for every Service of type
+LoadBalancer, and it sets the allocated IP to the `loadBalancer.ingress` field
+in the Service resource `status`. MetalLB also supports a user-specified
+`loadBalancerIP` in the Service spec. For more information, please refer to the
+[MetalLB usage](https://metallb.universe.tf/usage) documentation.
+
+To learn more about MetalLB concepts and functionalities, you can read the
+[MetalLB concepts](https://metallb.universe.tf/concepts).
+
+### Install MetalLB
+
+You can run the following command to install MetalLB using the YAML manifest:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
+```
+
+The command will deploy MetalLB version 0.13.11 into Namespace
+`metallb-system`. You can also refer to this [MetalLB installation
+guide](https://metallb.universe.tf/installation) for other ways of installing
+MetalLB.
+
+As MetalLB will allocate external IPs for all Services of type LoadBalancer,
+once it is running, the Service external IP management feature of Antrea should
+not be enabled to avoid conflicts with MetalLB. You can deploy Antrea with the
+default configuration (in which the `ServiceExternalIP` feature gate of
+`antrea-agent` is set to `false`). MetalLB can work with both Antrea Proxy and
+`kube-proxy` configurations of `antrea-agent`.
+
+### Configure MetalLB with layer 2 mode
+
+Similar to the case of Antrea Service external IP management, MetalLB layer 2
+mode also requires `kube-proxy`'s `strictARP` configuration to be enabled, when
+you are using `kube-proxy` IPVS. Please refer to the [Interoperability with
+kube-proxy IPVS mode](#interoperability-with-kube-proxy-ipvs-mode) section for
+more information.
+
+MetalLB is configured through Custom Resources (since v0.13). To configure
+MetalLB to work in the layer 2 mode, you need to create an `L2Advertisement`
+resource, as well as an `IPAddressPool` resource, which provides the IP ranges
+to allocate external IPs from. The IP ranges should be from the Node network
+subnet.
+
+For example:
+
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ name: first-pool
+ namespace: metallb-system
+spec:
+ addresses:
+ - 10.10.0.2-10.10.0.10
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+ name: example
+ namespace: metallb-system
+```
+
+### Configure MetalLB with BGP mode
+
+The BGP mode of MetalLB requires more configuration parameters to establish BGP
+peering to the router. The example resources below configure MetalLB using AS
+number 64500 to connect to peer router 10.0.0.1 with AS number 64501:
+
+```yaml
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+ name: sample
+ namespace: metallb-system
+spec:
+ myASN: 64500
+ peerASN: 64501
+ peerAddress: 10.0.0.1
+---
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ name: first-pool
+ namespace: metallb-system
+spec:
+ addresses:
+ - 10.10.0.2-10.10.0.10
+---
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+ name: example
+ namespace: metallb-system
+```
+
+In addition to the basic layer 2 and BGP mode configurations described in this
+document, MetalLB supports a few more advanced BGP configurations and supports
+configuring multiple IP pools which can use different modes. For more
+information, please refer to the [MetalLB configuration guide](https://metallb.universe.tf/configuration).
+
+## Interoperability with kube-proxy IPVS mode
+
+Both Antrea Service external IP management and MetalLB layer 2 mode require
+`kube-proxy`'s `strictARP` configuration to be enabled, to work with
+`kube-proxy` in IPVS mode. You can check the `strictARP` configuration in the
+`kube-proxy` ConfigMap:
+
+```bash
+$ kubectl describe configmap -n kube-system kube-proxy | grep strictARP
+ strictARP: false
+```
+
+You can set `strictARP` to `true` by editing the `kube-proxy` ConfigMap:
+
+```bash
+kubectl edit configmap -n kube-system kube-proxy
+```
+
+Or, simply run the following command to set it:
+
+```bash
+$ kubectl get configmap kube-proxy -n kube-system -o yaml | \
+ sed -e "s/strictARP: false/strictARP: true/" | \
+ kubectl apply -f - -n kube-system
+```
+
+Finally, check that the change has been made:
+
+```bash
+$ kubectl describe configmap -n kube-system kube-proxy | grep strictARP
+ strictARP: true
+```
+
+### Issue with Antrea Egress
+
+If you are using Antrea v1.7.0 or later, you can ignore this issue. The
+implementation of Antrea Egress before v1.7.0 did not work with the `strictARP`
+configuration of `kube-proxy`, which means Antrea Egress could not work together
+with Service external IP management or MetalLB layer 2 mode when `kube-proxy`
+IPVS mode was used. This issue was fixed in Antrea v1.7.0.
diff --git a/content/docs/v2.2.0-alpha.2/docs/support-bundle-guide.md b/content/docs/v2.2.0-alpha.2/docs/support-bundle-guide.md
new file mode 100644
index 00000000..a2ffb323
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/support-bundle-guide.md
@@ -0,0 +1,263 @@
+# Support Bundle User Guide
+
+## What is Support Bundle
+
+Antrea supports collecting support bundle tarballs, which include the information
+from the Antrea Controller and Antrea Agents. The collected information can help
+debug issues in the Kubernetes cluster.
+
+**Be aware that the generated support bundle includes a lot of information,
+including logs, so please review the contents before sharing it on Github
+and ensure that you do not share any sensitive information.**
+
+There are two ways of generating support bundles. First, you can run
+`antctl supportbundle` directly in the Antrea Agent Pod, in the Antrea
+Controller Pod, or on a host with a `kubeconfig` file for the target cluster.
+Second, you can apply `SupportBundleCollection` CRs to create support bundles
+for K8s Nodes or external Nodes; we call this feature `SupportBundleCollection`.
+The details are provided in the [Usage examples](#usage-examples) section.
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [The SupportBundleCollection CRD](#the-supportbundlecollection-crd)
+- [Usage examples](#usage-examples)
+ - [Running antctl commands](#running-antctl-commands)
+ - [Applying SupportBundleCollection CR](#applying-supportbundlecollection-cr)
+- [List of collected items](#list-of-collected-items)
+- [Limitations](#limitations)
+
+
+## Prerequisites
+
+The `antctl supportbundle` command has been supported since Antrea version 0.7.0.
+
+The `SupportBundleCollection` CRD was introduced in Antrea v1.10.0 as an alpha
+feature. The feature gate must be enabled in both antrea-controller and
+antrea-agent configurations. If you plan to collect support bundle on an external
+Node, you should enable it in the configuration on the external Node as well.
+
+```yaml
+ antrea-agent.conf: |
+ featureGates:
+ # Enable collecting support bundle files with SupportBundleCollection CRD.
+ SupportBundleCollection: true
+```
+
+```yaml
+ antrea-controller.conf: |
+ featureGates:
+ # Enable collecting support bundle files with SupportBundleCollection CRD.
+ SupportBundleCollection: true
+```
+
+A single Namespace (e.g., default) is created for saving the Secrets that are
+used to access the support bundle file server, and the permission to read Secrets
+in this Namespace is given to antrea-controller by modifying and applying the
+[RBAC file](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/externalnode/support-bundle-collection-rbac.yml).
+
+```yaml
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: antrea-read-secrets
+ namespace: default # Change the Namespace to where the Secret for file server's authentication credential is created.
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: antrea-secret-reader
+subjects:
+ - kind: ServiceAccount
+ name: antrea-controller
+ namespace: kube-system
+```
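+
+After updating the Namespace in the file, apply it with:
+
+```bash
+kubectl apply -f support-bundle-collection-rbac.yml
+```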
+
+## The SupportBundleCollection CRD
+
+The SupportBundleCollection CRD was introduced to supplement the `antctl` command
+with three additional features:
+
+1. Allow users to collect support bundle files on external Nodes.
+2. Upload all the support bundle files into a user-provided SFTP Server.
+3. Support tracking status of a SupportBundleCollection CR.
+
+## Usage examples
+
+### Running antctl commands
+
+Please refer to the [antctl user guide section](antctl.md#collecting-support-information).
+Note: `antctl supportbundle` can only collect support bundles from the Antrea
+Controller and from Antrea Agents running on K8s Nodes; it does not work for
+Agents on external Nodes.
+
+### Applying SupportBundleCollection CR
+
+In this section, we will create two SupportBundleCollection CRs for K8s Nodes
+and external Nodes. Note that Nodes/ExternalNodes can be specified in a
+SupportBundleCollection CR either by name or by matching their labels.
+
+Assume we have a cluster with Nodes named "worker1" and "worker2". In addition,
+we have set up two external Nodes named "vm1" and "vm2" in the "vm-ns" Namespace
+by following the instruction of the [VM installation guide](external-node.md#install-antrea-agent-on-vm).
+In addition, an SFTP server needs to be provided in advance to collect the bundle.
+You can host the SFTP server by applying the YAML `hack/externalnode/sftp-deployment.yml`
+or deploy one by yourself.
+
+A Secret needs to be created in advance with the username and password of the SFTP
+Server. The Secret will be referred to as `authSecret` in the following YAML examples.
+
+```bash
+# Set username and password with `--from-literal=username='foo' --from-literal=password='pass'`
+# if the sftp server is deployed with sftp-deployment.yml
+kubectl create secret generic support-bundle-secret --from-literal=username='your-sftp-username' --from-literal=password='your-sftp-password'
+```
+
+Then we can apply the following YAML files. The first one is to collect support
+bundle on K8s Nodes "worker1" and "worker2": "worker1" is specified by the name,
+and "worker2" is specified by matching label "role: workers". The second one is to
+collect support bundle on external Nodes "vm1" and "vm2" in Namespace "vm-ns":
+"vm1" is specified by the name, and "vm2" is specified by matching label "role: vms".
+
+```bash
+cat << EOF | kubectl apply -f -
+apiVersion: crd.antrea.io/v1alpha1
+kind: SupportBundleCollection
+metadata:
+ name: support-bundle-for-nodes
+spec:
+ nodes: # All Nodes will be selected if both nodeNames and matchLabels are empty
+ nodeNames:
+ - worker1
+ matchLabels:
+ role: workers
+ expirationMinutes: 10 # expirationMinutes is the requested duration of validity of the SupportBundleCollection. A SupportBundleCollection will be marked as Failed if it does not finish before expiration.
+ sinceTime: 2h # Collect the logs in the latest 2 hours. Collect all available logs if the time is not set.
+ fileServer:
+ url: sftp://yourtestdomain.com:22/root/test
+ authentication:
+ authType: "BasicAuthentication"
+ authSecret:
+ name: support-bundle-secret
+ namespace: default # antrea-controller must be given the permission to read Secrets in "default" Namespace.
+EOF
+```
+
+```bash
+cat << EOF | kubectl apply -f -
+apiVersion: crd.antrea.io/v1alpha1
+kind: SupportBundleCollection
+metadata:
+ name: support-bundle-for-vms
+spec:
+ externalNodes: # All ExternalNodes in the Namespace will be selected if both nodeNames and matchLabels are empty
+ nodeNames:
+ - vm1
+ nodeSelector:
+ matchLabels:
+ role: vms
+ namespace: vm-ns # namespace is mandatory when collecting support bundle from external Nodes.
+ fileServer:
+ url: yourtestdomain.com:22/root/test # Scheme sftp can be omitted. The url of "$controlplane_node_ip:30010/upload" is used if deployed with sftp-deployment.yml.
+ authentication:
+ authType: "BasicAuthentication"
+ authSecret:
+ name: support-bundle-secret
+ namespace: default # antrea-controller must be given the permission to read Secrets in "default" Namespace.
+EOF
+```
+
+For more information about the supported fields in a "SupportBundleCollection"
+CR, please refer to the [CRD definition](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/charts/antrea/crds/supportbundlecollection.yaml).
+
+You can check the status of `SupportBundleCollection` by running command
+`kubectl get supportbundlecollections [NAME] -ojson`.
+The following example shows a successful realization of `SupportBundleCollection`.
+`desiredNodes` shows the expected number of Nodes/ExternalNodes that this
+request should collect from, while `collectedNodes` shows the number of
+Nodes/ExternalNodes which have already uploaded bundle files to the target file
+server. If the collection completes successfully, `collectedNodes` and
+`desiredNodes` should have the same value, matching the number of
+Nodes/ExternalNodes from which you want to collect support bundles.
+
+A bundle collection has succeeded when the following two conditions are met:
+
+1. The "Completed" condition is true.
+2. The "CollectionFailure" condition is false.
+
+If any expected Node/ExternalNode fails to upload the bundle files within the
+required time, the "CollectionFailure" condition will be set to true.
+
+```bash
+$ kubectl get supportbundlecollections support-bundle-for-nodes -ojson
+
+...
+ "status": {
+ "collectedNodes": 1,
+ "conditions": [
+ {
+ "lastTransitionTime": "2022-12-08T06:49:35Z",
+ "status": "True",
+ "type": "Started"
+ },
+ {
+ "lastTransitionTime": "2022-12-08T06:49:41Z",
+ "status": "True",
+ "type": "BundleCollected"
+ },
+ {
+ "lastTransitionTime": "2022-12-08T06:49:35Z",
+ "status": "False",
+ "type": "CollectionFailure"
+ },
+ {
+ "lastTransitionTime": "2022-12-08T06:49:41Z",
+ "status": "True",
+ "type": "Completed"
+ }
+ ],
+ "desiredNodes": 1
+ }
+```
+
+The collected bundles should include four tarballs. To access these files, you
+can download the files from the SFTP server `yourtestdomain.com`. There will be
+two tarballs for `support-bundle-for-nodes`: "support-bundle-for-nodes_worker1.tar.gz"
+and "support-bundle-for-nodes_worker2.tar.gz", and two for `support-bundle-for-vms`:
+"support-bundle-for-vms_vm1.tar.gz" and "support-bundle-for-vms_vm2.tar.gz", in
+the `/root/test` folder. Run the `tar xvf $TARBALL_NAME` command to extract the
+files from the tarballs.
+
+## List of collected items
+
+Depending on the methods you use to collect the support bundle, the contents in
+the bundle may differ. The following table shows the differences.
+
+We use `agent`, `controller`, and `outside` to represent running the
+`antctl supportbundle` command in the Antrea Agent, in the Antrea Controller,
+and out-of-cluster, respectively. Also, we use `Node` and `ExternalNode` to
+represent "create a SupportBundleCollection CR for Nodes" and "create a
+SupportBundleCollection CR for external Nodes".
+
+| Collected Item | Supported Collecting Method | Explanation |
+|-----------------------------|----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Antrea Agent Log | `agent`, `outside`, `Node`, `ExternalNode` | Antrea Agent log files |
+| Antrea Controller Log | `controller`, `outside` | Antrea Controller log files |
+| iptables (Linux Only)       | `agent`, `outside`, `Node`, `ExternalNode`               | Output of `ip6tables-save` and `iptables-save` with counters                                                                                                                                                                                                               |
+| OVS Ports | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ovs-ofctl dump-ports-desc` |
+| NetworkPolicy Resources | `agent`, `controller`, `outside`, `Node`, `ExternalNode` | YAML output of `antctl get appliedtogroups` and `antctl get addressgroups` commands |
+| Heap Pprof | `agent`, `controller`, `outside`, `Node`, `ExternalNode` | Output of [`pprof.WriteHeapProfile`](https://pkg.go.dev/runtime/pprof#WriteHeapProfile) |
+| HNSResources (Windows Only) | `agent`, `outside`, `Node`, `ExternalNode` | Output of `Get-HNSNetwork` and `Get-HNSEndpoint` commands |
+| Antrea Agent Info | `agent`, `outside`, `Node`, `ExternalNode` | YAML output of `antctl get agentinfo` |
+| Antrea Controller Info | `controller`, `outside` | YAML output of `antctl get controllerinfo` |
+| IP Address Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip address` command on Linux or `ipconfig /all` command on Windows |
+| IP Route Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip route` on Linux or `route print` on Windows |
+| IP Link Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip link` on Linux or `Get-NetAdapter` on Windows |
+| Cluster Information | `outside` | Dump of resources in the cluster, including: 1. all Pods, Deployments, Replicasets and Daemonsets in all Namespaces with any resourceVersion. 2. all Nodes with any resourceVersion. 3. all ConfigMaps in all Namespaces with any resourceVersion and label `app=antrea`. |
+| Memberlist State | `agent`, `outside` | YAML output of `antctl get memberlist` |
+
+## Limitations
+
+Only SFTP basic authentication is supported for SupportBundleCollection.
+Other authentication methods will be added in the future.
diff --git a/content/docs/v2.2.0-alpha.2/docs/traceflow-guide.md b/content/docs/v2.2.0-alpha.2/docs/traceflow-guide.md
new file mode 100644
index 00000000..bf4c2c06
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/traceflow-guide.md
@@ -0,0 +1,180 @@
+# Traceflow User Guide
+
+Antrea supports using Traceflow for network diagnosis. It can inject a packet
+into OVS on a Node and trace the forwarding path of the packet across Nodes, and
+it can also trace a matched packet of real traffic from or to a Pod. In either
+case, a Traceflow operation is triggered by a Traceflow CRD which specifies the
+type of Traceflow, the source and destination of the packet to trace, and the
+headers of the packet. The Traceflow results are populated to the
+`status` field of the Traceflow CRD, and include the observations of the trace
+packet at various observation points in the forwarding path. Besides creating
+the Traceflow CRD using kubectl, users can also start a Traceflow using
+`antctl`, or from the [Antrea web UI](https://github.com/antrea-io/antrea-ui).
+When using the Antrea web UI, the Traceflow results can be visualized using a
+graph.
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [Start a New Traceflow](#start-a-new-traceflow)
+ - [Using kubectl and YAML file (IPv4)](#using-kubectl-and-yaml-file-ipv4)
+ - [Using kubectl and YAML file (IPv6)](#using-kubectl-and-yaml-file-ipv6)
+ - [Live-traffic Traceflow](#live-traffic-traceflow)
+ - [Using antctl](#using-antctl)
+ - [Using the Antrea web UI](#using-the-antrea-web-ui)
+- [View Traceflow Result and Graph](#view-traceflow-result-and-graph)
+- [RBAC](#rbac)
+
+
+## Prerequisites
+
+The Traceflow feature has been enabled by default since Antrea version v0.11. In order
+to use a Service as the destination in traces, Antrea Proxy (also enabled by
+default since v0.11) is required.
+
+## Start a New Traceflow
+
+You can choose to use `kubectl` together with a YAML file, the `antctl traceflow`
+command, or the Antrea UI to start a new trace.
+
+When starting a new trace, you can provide the following information which will be used to build the trace packet:
+
+* source Pod
+* destination Pod, Service or destination IP address
+* transport protocol (TCP/UDP/ICMP)
+* transport ports
+
+### Using kubectl and YAML file (IPv4)
+
+You can start a new trace by creating a Traceflow CRD via kubectl, using a YAML file which contains the essential
+configuration of the Traceflow CRD. An example YAML file might look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Traceflow
+metadata:
+ name: tf-test
+spec:
+ source:
+ namespace: default
+ pod: tcp-sts-0
+ destination:
+ namespace: default
+ pod: tcp-sts-2
+ # destination can also be an IP address ('ip' field) or a Service name ('service' field); the 3 choices are mutually exclusive.
+ packet:
+ ipHeader: # If ipHeader/ipv6Header is not set, the default value is IPv4+ICMP.
+ protocol: 6 # Protocol here can be 6 (TCP), 17 (UDP) or 1 (ICMP), default value is 1 (ICMP)
+ transportHeader:
+ tcp:
+ srcPort: 10000 # Source port for TCP/UDP. If omitted, a random port will be used.
+ dstPort: 80 # Destination port needs to be set when Protocol is TCP/UDP.
+ flags: 2 # Construct a SYN packet: 2 is also the default value when the flags field is omitted.
+```
+
+The CRD above starts a new trace from port 10000 of the source Pod named `tcp-sts-0` to port 80
+of the destination Pod named `tcp-sts-2`, using the TCP protocol.
+
+### Using kubectl and YAML file (IPv6)
+
+Antrea Traceflow supports IPv6 traffic. An example YAML file of Traceflow CRD might look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Traceflow
+metadata:
+ name: tf-test-ipv6
+spec:
+ source:
+ namespace: default
+ pod: tcp-sts-0
+ destination:
+ namespace: default
+ pod: tcp-sts-2
+ # destination can also be an IPv6 address ('ip' field) or a Service name ('service' field); the 3 choices are mutually exclusive.
+ packet:
+    ipv6Header: # ipv6Header MUST be set to run Traceflow in IPv6, and ipHeader will be ignored when ipv6Header is set.
+ nextHeader: 58 # Protocol here can be 6 (TCP), 17 (UDP) or 58 (ICMPv6), default value is 58 (ICMPv6)
+```
+
+The CRD above starts a new trace from the source Pod named `tcp-sts-0` to the destination Pod named `tcp-sts-2`, using
+the ICMPv6 protocol.
+
+### Live-traffic Traceflow
+
+Starting from Antrea version 1.0.0, you can trace a packet of the real traffic
+from or to a Pod, instead of the injected packet. To start such a Traceflow, add
+`liveTraffic: true` to the Traceflow `spec`. Then, the first packet of the first
+connection that matches the Traceflow spec will be traced (connections opened
+before the Traceflow was initiated will be ignored), and the headers of the
+packet will be captured and reported in the `status` field of the Traceflow CRD,
+in addition to the observations. A live-traffic Traceflow requires only one of
+`source` and `destination` to be specified. When `source` or `destination` is
+not specified, it means that a packet can be captured regardless of its source
+or destination. One of `source` and `destination` must be a Pod. When `source`
+is not specified, or is an IP address, only the receiver Node will capture the
+packet and trace it after the L2 forwarding observation point. This means that
+even if the source of the packet is on the same Node as the destination, no
+observations on the sending path will be reported for the Traceflow. By default,
+a live-traffic Traceflow (like a normal Traceflow) will time out after 20
+seconds, and if no matching packet is captured before the timeout, the
+Traceflow will fail. You can specify a different timeout value by setting the
+`timeout` field (in seconds) in the Traceflow `spec`.
+
+In some cases, it might be useful to capture the packets dropped by
+NetworkPolicies (including K8s NetworkPolicies and Antrea-native policies). You can
+add `droppedOnly: true` to the live-traffic Traceflow `spec`, then the first
+packet that matches the Traceflow spec and is dropped by a NetworkPolicy will
+be captured and traced.
+
+The following example is a live-traffic Traceflow that captures a dropped UDP
+packet to UDP port 1234 of Pod udp-server, within 1 minute:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Traceflow
+metadata:
+ name: tf-test
+spec:
+ liveTraffic: true
+ droppedOnly: true
+ destination:
+ namespace: default
+ pod: udp-server
+ packet:
+ transportHeader:
+ udp:
+ dstPort: 1234
+ timeout: 60
+```
+
+### Using antctl
+
+Please refer to the corresponding [antctl page](antctl.md#traceflow).
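+
+As a quick sketch (assuming the example Pods used earlier in this document;
+see the antctl page for the full set of supported flags):
+
+```bash
+antctl traceflow -S tcp-sts-0 -D tcp-sts-2 -f tcp,tcp_dst=80
+```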
+
+### Using the Antrea web UI
+
+Please refer to the [Antrea UI documentation](https://github.com/antrea-io/antrea-ui)
+for installation instructions. Once you can access the UI in your browser,
+navigate to the `Traceflow` page.
+
+## View Traceflow Result and Graph
+
+You can always view the Traceflow result directly via the Traceflow CRD status and see if the packet was successfully
+delivered or dropped at a certain packet-processing stage. Antrea also provides a more user-friendly way of showing the
+Traceflow result, via a trace graph, when using the Antrea UI.
+
+## RBAC
+
+Traceflow CRDs are meant for admins to troubleshoot and diagnose the network
+by injecting a packet from a source workload to a destination workload. Thus,
+access to manage these CRDs must be granted to subjects which
+have the authority to perform these diagnostic actions. On cluster
+initialization, Antrea adds the permission to edit these CRDs to the `admin`
+and `edit` ClusterRoles. In addition to this, Antrea also adds the
+permission to view these CRDs to the `view` ClusterRole. Cluster admins can
+therefore grant these ClusterRoles to any subject who may be responsible for
+troubleshooting the network. The admins may also decide to grant the `view`
+ClusterRole to a wider range of subjects to allow them to read the Traceflows
+that are active in the cluster.
diff --git a/content/docs/v2.2.0-alpha.2/docs/traffic-control.md b/content/docs/v2.2.0-alpha.2/docs/traffic-control.md
new file mode 100644
index 00000000..da36e7a8
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/traffic-control.md
@@ -0,0 +1,278 @@
+# Traffic Control With Antrea
+
+## Table of Contents
+
+
+- [What is TrafficControl?](#what-is-trafficcontrol)
+- [Prerequisites](#prerequisites)
+- [The TrafficControl resource](#the-trafficcontrol-resource)
+ - [AppliedTo](#appliedto)
+ - [Direction](#direction)
+ - [Action](#action)
+ - [TargetPort](#targetport)
+ - [ReturnPort](#returnport)
+- [Examples](#examples)
+ - [Mirroring all traffic to remote analyzer](#mirroring-all-traffic-to-remote-analyzer)
+ - [Redirecting specific traffic to local receiver](#redirecting-specific-traffic-to-local-receiver)
+- [What's next](#whats-next)
+
+
+## What is TrafficControl?
+
+`TrafficControl` is a CRD API that manages and manipulates the transmission of
+Pod traffic. It allows users to mirror or redirect specific traffic originating
+from specific Pods or destined for specific Pods to a local network device or a
+remote destination via a tunnel of various types. It provides full visibility
+into network traffic, including both north-south and east-west traffic.
+
+You may be interested in using this capability if any of the following apply:
+
+- You want to monitor network traffic passing in or out of a set of Pods for
+ purposes such as troubleshooting, intrusion detection, and so on.
+
+- You want to redirect network traffic passing in or out of a set of Pods to
+ applications that enforce policies, and reject traffic to prevent intrusion.
+
+This guide demonstrates how to configure `TrafficControl` to achieve the above
+goals.
+
+## Prerequisites
+
+TrafficControl was introduced in v1.7 as an alpha feature. The `TrafficControl`
+feature gate must be enabled in the antrea-agent configuration in the
+`antrea-config` ConfigMap for the feature to work, like the following:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ TrafficControl: true
+```
+
+## The TrafficControl resource
+
+A TrafficControl in Kubernetes is a REST object. Like all REST objects, you
+can POST a TrafficControl definition to the API server to create a new instance.
+For example, supposing you have a set of Pods which contain a label `app=web`,
+the following specification creates a new TrafficControl object named
+"mirror-web-app", which mirrors all traffic from or to any Pod with the
+`app=web` label and sends it to a receiver running on "10.0.10.2", encapsulated
+within a VXLAN tunnel:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: mirror-web-app
+spec:
+ appliedTo:
+ podSelector:
+ matchLabels:
+ app: web
+ direction: Both
+ action: Mirror
+ targetPort:
+ vxlan:
+ remoteIP: 10.0.10.2
+```
+
+### AppliedTo
+
+The `appliedTo` field specifies the grouping criteria of Pods to which the
+TrafficControl applies. Pods can be selected cluster-wide using
+`podSelector`. If set with a `namespaceSelector`, all Pods from Namespaces
+selected by the `namespaceSelector` will be selected. Specific Pods from
+specific Namespaces can be selected by providing both a `podSelector` and a
+`namespaceSelector`. Empty `appliedTo` selects nothing. The field is mandatory.
+
+### Direction
+
+The `direction` field specifies the direction of traffic that should be matched.
+It can be `Ingress`, `Egress`, or `Both`.
+
+### Action
+
+The `action` field specifies which action should be taken for the traffic. It
+can be `Mirror` or `Redirect`. For the `Mirror` action, `targetPort` must be
+set to the port to which the traffic will be mirrored. For the `Redirect`
+action, both `targetPort` and `returnPort` need to be specified, the latter of
+which represents the port from which the traffic could be sent back to OVS and
+be forwarded to its original destination. Once redirected, a packet should be
+either dropped or sent back to OVS without modification, otherwise it would lead
+to undefined behavior.
+
+### TargetPort
+
+The `targetPort` field specifies the port to which the traffic should be
+redirected or mirrored. There are six kinds of ports that can be used to
+receive redirected or mirrored traffic:
+
+**ovsInternal**: This specifies an OVS internal port on all Nodes. A Pod's
+traffic will be redirected or mirrored to the OVS internal port on the same Node
+that hosts the Pod. The port doesn't need to exist in advance, Antrea will
+create the port if it doesn't exist. To use an OVS internal port, the `name` of
+the port must be provided:
+
+```yaml
+ovsInternal:
+ name: tap0
+```
+
+**device**: This specifies a network device on all Nodes. A Pod's traffic will
+be redirected or mirrored to the network device on the same Node that hosts the
+Pod. The network device must exist on all Nodes and Antrea will attach it to the
+OVS bridge if not already attached. To use a network device, the `name` of the
+device must be provided:
+
+```yaml
+device:
+ name: eno2
+```
+
+**geneve**: This specifies a remote destination for a GENEVE tunnel. All
+selected Pods' traffic will be redirected or mirrored to the destination via
+a GENEVE tunnel. The `remoteIP` field must be provided to specify the IP address
+of the destination. Optionally, the `destinationPort` field could be used to
+specify the UDP destination port of the tunnel, or 6081 will be used by default.
+If a Virtual Network Identifier (VNI) is desired, the `vni` field can be set to
+an integer in the range 0-16,777,215:
+
+```yaml
+geneve:
+ remoteIP: 10.0.10.2
+ destinationPort: 6081
+ vni: 1
+```
+
+**vxlan**: This specifies a remote destination for a VXLAN tunnel. All
+selected Pods' traffic will be redirected or mirrored to the destination via
+a VXLAN tunnel. The `remoteIP` field must be provided to specify the IP address
+of the destination. Optionally, the `destinationPort` field could be used to
+specify the UDP destination port of the tunnel, or 4789 will be used by default.
+If a Virtual Network Identifier (VNI) is desired, the `vni` field can be set to
+an integer in the range 0-16,777,215:
+
+```yaml
+vxlan:
+ remoteIP: 10.0.10.2
+ destinationPort: 4789
+ vni: 1
+```
+
+**gre**: This specifies a remote destination for a GRE tunnel. All selected
+Pods' traffic will be redirected or mirrored to the destination via a GRE
+tunnel. The `remoteIP` field must be provided to specify the IP address of the
+destination. If a GRE key is desired, the `key` field can be set to an
+integer in the range 0-4,294,967,295:
+
+```yaml
+gre:
+ remoteIP: 10.0.10.2
+ key: 1
+```
+
+**erspan**: This specifies a remote destination for an ERSPAN tunnel. All
+selected Pods' traffic will be mirrored to the destination via an ERSPAN tunnel.
+The `remoteIP` field must be provided to specify the IP address of the
+destination. If an ERSPAN session ID is desired, the `sessionID` field can be
+set to an integer in the range 0-1,023. The `version` field must be
+provided to specify the ERSPAN version: 1 for version 1 (type II), or 2 for
+version 2 (type III).
+
+For version 1, the `index` field can be specified to associate with the ERSPAN
+traffic's source port and direction. An example of version 1 might look like
+this:
+
+```yaml
+erspan:
+ remoteIP: 10.0.10.2
+ sessionID: 1
+ version: 1
+ index: 1
+```
+
+For version 2, the `dir` field can be specified to indicate the mirrored
+traffic's direction: 0 for ingress traffic, 1 for egress traffic. The
+`hardwareID` field can be specified as a unique identifier of an ERSPAN v2
+engine. An example of version 2 might look like this:
+
+```yaml
+erspan:
+ remoteIP: 10.0.10.2
+ sessionID: 1
+ version: 2
+ dir: 0
+ hardwareID: 4
+```
+
+### ReturnPort
+
+The `returnPort` field should only be set when the `action` is `Redirect`. It is
+similar to the `targetPort` field, but meant for specifying the port from which
+the traffic will be sent back to OVS and be forwarded to its original
+destination.
+
+## Examples
+
+### Mirroring all traffic to remote analyzer
+
+In this example, we will mirror all Pods' traffic and send it to a remote
+destination via a GENEVE tunnel:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: mirror-all-to-remote
+spec:
+ appliedTo:
+ podSelector: {}
+ direction: Both
+ action: Mirror
+ targetPort:
+ geneve:
+ remoteIP: 10.0.10.2
+```
+
+### Redirecting specific traffic to local receiver
+
+In this example, we will redirect traffic of all Pods in the Namespace `prod` to
+OVS internal ports named `tap0` configured on Nodes that these Pods run on.
+The `returnPort` configuration means that if the traffic is sent back to OVS
+from the OVS internal ports named `tap1`, it will be forwarded to its original
+destination. Therefore, if an intrusion prevention system or a network firewall
+is configured to capture and forward traffic between `tap0` and `tap1`, it can
+actively scan forwarded network traffic for malicious activities and known
+attack patterns, and drop the traffic determined to be malicious.
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: redirect-prod-to-local
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: prod
+ direction: Both
+ action: Redirect
+ targetPort:
+ ovsInternal:
+ name: tap0
+ returnPort:
+ ovsInternal:
+ name: tap1
+```
+
+## What's next
+
+With the `TrafficControl` capability, Antrea can be used with threat detection
+engines to provide network-based IDS/IPS to Pods. We provide a reference
+cookbook on how to implement IDS using Suricata. For more information, refer to
+the [cookbook](cookbooks/ids).
diff --git a/content/docs/v2.2.0-alpha.2/docs/traffic-encryption.md b/content/docs/v2.2.0-alpha.2/docs/traffic-encryption.md
new file mode 100644
index 00000000..1b03c8e5
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/traffic-encryption.md
@@ -0,0 +1,129 @@
+# Traffic Encryption with Antrea
+
+Antrea supports encrypting traffic across Linux Nodes with IPsec ESP or
+WireGuard. Traffic encryption is not supported on Windows Nodes yet.
+
+## IPsec
+
+IPsec encryption works for all tunnel types supported by OVS including Geneve,
+GRE, VXLAN, and STT.
+
+Note that GRE is not supported for IPv6 clusters (IPv6-only or dual-stack
+clusters). For such clusters, please choose a different tunnel type such as
+Geneve or VXLAN.
+
+### Prerequisites
+
+IPsec requires a set of Linux kernel modules. Check the required kernel modules
+listed in the [strongSwan documentation](https://wiki.strongswan.org/projects/strongswan/wiki/KernelModules).
+Make sure the required kernel modules are loaded on the Kubernetes Nodes before
+deploying Antrea with IPsec encryption enabled.
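+
+As a quick sanity check, you can try loading a module on a Node (a sketch for
+one module, `esp4`; repeat for each module from the strongSwan list that your
+setup requires):
+
+```bash
+# No output and a zero exit code mean the module loaded (or is built in).
+sudo modprobe esp4
+```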
+
+If you want to enable IPsec with Geneve, please make sure [this commit](https://github.com/torvalds/linux/commit/34beb21594519ce64a55a498c2fe7d567bc1ca20)
+is included in the kernel. For Ubuntu 18.04, kernel version should be at least
+`4.15.0-128`. For Ubuntu 20.04, kernel version should be at least `5.4.70`.
+
+### Antrea installation
+
+You can simply apply the [Antrea IPsec deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea-ipsec.yml)
+to deploy Antrea with IPsec encryption enabled. To deploy a released version of
+Antrea, pick a version from the [list of releases](https://github.com/antrea-io/antrea/releases).
+Note that IPsec support was added in release 0.3.0, which means you cannot
+pick a release older than 0.3.0. For any given release `<TAG>` (e.g. `v0.3.0`),
+get the Antrea IPsec deployment yaml at:
+
+```text
+https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-ipsec.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), get the
+IPsec deployment yaml at:
+
+```text
+https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-ipsec.yml
+```
+
+Antrea leverages strongSwan as the IKE daemon, and supports using pre-shared key
+(PSK) for IKE authentication. The deployment yaml creates a Kubernetes Secret
+`antrea-ipsec` to store the PSK string. For security reasons, we recommend
+changing the default PSK string in the yaml file. You can edit the yaml file,
+and update the `psk` field in the `antrea-ipsec` Secret spec to any string you
+want to use. Check the `antrea-ipsec` Secret spec below:
+
+```yaml
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: antrea-ipsec
+ namespace: kube-system
+stringData:
+ psk: changeme
+type: Opaque
+```
+
+After updating the PSK value, deploy Antrea with:
+
+```bash
+kubectl apply -f antrea-ipsec.yml
+```
+
+By default, the deployment yaml uses GRE as the tunnel type, which you can
+change by editing the file. You will need to change the tunnel type to another
+one if your cluster supports IPv6.
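+
+For example, to switch the tunnel type to VXLAN, set the `tunnelType` parameter
+in `antrea-agent.conf` of the ConfigMap:
+
+```yaml
+  antrea-agent.conf: |
+    tunnelType: vxlan
+```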
+
+## WireGuard
+
+Antrea can leverage [WireGuard](https://www.wireguard.com) to encrypt Pod traffic
+between Nodes. WireGuard encryption works like another tunnel type, and when it
+is enabled the `tunnelType` parameter in the `antrea-agent` configuration file
+will be ignored.
+
+### Prerequisites
+
+WireGuard encryption requires the `wireguard` kernel module to be present on
+the Kubernetes Nodes. The `wireguard` module has been part of the mainline
+kernel since Linux 5.6. Alternatively, you can compile the module from source
+code with a kernel version >= 3.10.
+[This WireGuard installation guide](https://www.wireguard.com/install) documents how to
+install WireGuard together with the kernel module on various operating systems.
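+
+You can quickly confirm that the module is available on a Node with:
+
+```bash
+# No output and a zero exit code mean the module loaded (or is built in).
+sudo modprobe wireguard
+```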
+
+### Antrea installation
+
+First, download the [Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/yamls/antrea.yml). To deploy
+a released version of Antrea, pick a version from the [list of releases](https://github.com/antrea-io/antrea/releases).
+Note that WireGuard support was added in release 1.3.0, which means you cannot
+pick a release older than 1.3.0. For any given release `<TAG>` (e.g. `v1.3.0`),
+get the Antrea deployment yaml at:
+
+```text
+https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), get the
+deployment yaml at:
+
+```text
+https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+To enable WireGuard encryption, set the `trafficEncryptionMode` config
+parameter of `antrea-agent` to `wireGuard`. The `trafficEncryptionMode` config
+parameter is defined in `antrea-agent.conf` of the `antrea-config` ConfigMap
+in the Antrea deployment
+yaml:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ trafficEncryptionMode: wireGuard
+```
+
+After saving the yaml file change, deploy Antrea with:
+
+```bash
+kubectl apply -f antrea.yml
+```
diff --git a/content/docs/v2.2.0-alpha.2/docs/troubleshooting.md b/content/docs/v2.2.0-alpha.2/docs/troubleshooting.md
new file mode 100644
index 00000000..78689738
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/troubleshooting.md
@@ -0,0 +1,302 @@
+# Troubleshooting
+
+## Table of Contents
+
+
+- [Looking at the Antrea logs](#looking-at-the-antrea-logs)
+- [Accessing the antrea-controller API](#accessing-the-antrea-controller-api)
+ - [Using antctl](#using-antctl)
+ - [Using kubectl proxy](#using-kubectl-proxy)
+ - [Using antctl proxy](#using-antctl-proxy)
+ - [Directly accessing the antrea-controller API](#directly-accessing-the-antrea-controller-api)
+- [Accessing the antrea-agent API](#accessing-the-antrea-agent-api)
+ - [Using antctl](#using-antctl-1)
+ - [Using antctl proxy](#using-antctl-proxy-1)
+ - [Directly accessing the antrea-agent API](#directly-accessing-the-antrea-agent-api)
+- [Accessing the flow-aggregator API](#accessing-the-flow-aggregator-api)
+ - [Using antctl](#using-antctl-2)
+ - [Directly accessing the flow-aggregator API](#directly-accessing-the-flow-aggregator-api)
+- [Troubleshooting Open vSwitch](#troubleshooting-open-vswitch)
+- [Troubleshooting with antctl](#troubleshooting-with-antctl)
+- [Profiling Antrea components](#profiling-antrea-components)
+- [Ask your questions to the Antrea community](#ask-your-questions-to-the-antrea-community)
+
+## Looking at the Antrea logs
+
+You can inspect the `antrea-controller` logs in the `antrea-controller` Pod by
+running this `kubectl` command:
+
+```bash
+kubectl logs <antrea-controller-pod-name> -n kube-system
+```
+
+To check the logs of the `antrea-agent`, `antrea-ovs`, and `antrea-ipsec`
+containers in an `antrea-agent` Pod, run command:
+
+```bash
+kubectl logs <antrea-agent-pod-name> -n kube-system -c [antrea-agent|antrea-ovs|antrea-ipsec]
+```
+
+To check the OVS daemon logs (e.g. if the `antrea-ovs` container logs indicate
+that one of the OVS daemons generated an error), you can use `kubectl exec`:
+
+```bash
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- tail /var/log/openvswitch/<ovs-daemon-name>.log
+```
+
+The `antrea-controller` Pod and the list of `antrea-agent` Pods, along with the
+Nodes on which the Pods are scheduled, can be returned by command:
+
+```bash
+kubectl get pods -n kube-system -l app=antrea -o wide
+```
+
+Logs of `antrea-controller`, `antrea-agent`, OVS and strongSwan daemons are also
+stored in the filesystem of the Node (i.e. the Node on which the
+`antrea-controller` or `antrea-agent` Pod is scheduled).
+
+- `antrea-controller` logs are stored in directory: `/var/log/antrea` (on the
+Node where the `antrea-controller` Pod is scheduled).
+- `antrea-agent` logs are stored in directory: `/var/log/antrea` (on the Node
+where the `antrea-agent` Pod is scheduled).
+- Logs of the OVS daemons - `ovs-vswitchd`, `ovsdb-server`, `ovs-monitor-ipsec` -
+are stored in directory: `/var/log/antrea/openvswitch` (on the Node where the
+`antrea-agent` Pod is scheduled).
+- strongSwan daemon logs are stored in directory: `/var/log/antrea/strongswan`
+(on the Node where the `antrea-agent` Pod is scheduled).
+
+To increase the log level for the `antrea-agent` and the `antrea-controller`, you
+can edit the `--v=0` arg in the Antrea manifest to a desired level.
+Alternatively, you can generate an Antrea manifest with increased log level of
+4 (maximum debug level) using `generate-manifest.sh`:
+
+```bash
+hack/generate-manifest.sh --mode dev --verbose-log
+```
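+
+For example, to raise the verbosity of an existing manifest without
+regenerating it, you could rewrite the `--v=0` args directly. A sketch,
+assuming your manifest uses the `--v=0` form mentioned above:
+
+```bash
+# Bump the log level to 4 (maximum debug level) in a local copy of the
+# manifest, then re-apply it.
+sed -i 's/--v=0/--v=4/g' antrea.yml
+kubectl apply -f antrea.yml
+```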
+
+## Accessing the antrea-controller API
+
+antrea-controller runs as a Deployment, exposes its API via a Service and
+registers an APIService to aggregate into the Kubernetes API. To access the
+antrea-controller API, you need to know its address and have the credentials
+to access it. There are multiple ways in which you can access the API:
+
+### Using antctl
+
+Typically, `antctl` handles locating the Kubernetes API server and
+authentication when it runs in an environment with a kubeconfig set up. Like
+`kubectl`, `antctl` looks for a file named `config` in the `$HOME/.kube`
+directory. You can specify another kubeconfig file by setting the
+`--kubeconfig` flag.
+
+For example, you can view internal NetworkPolicy objects with this command:
+
+```bash
+antctl get networkpolicy
+```
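+
+If your kubeconfig is not in the default location, point `antctl` at it
+explicitly:
+
+```bash
+antctl get networkpolicy --kubeconfig /path/to/kubeconfig
+```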
+
+### Using kubectl proxy
+
+As the antrea-controller API is aggregated into the Kubernetes API, you can
+access it through the Kubernetes API using the appropriate URL paths. The
+following command runs `kubectl` in a mode where it acts as a reverse proxy for
+the Kubernetes API and handles authentication.
+
+```bash
+# Start the proxy in the background
+kubectl proxy &
+# Access the antrea-controller API path
+curl 127.0.0.1:8001/apis/controlplane.antrea.io
+```
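+
+From there, the aggregated API can be walked like any other Kubernetes API
+group. For example, to list the resources served by the `controlplane` group
+(the `v1beta2` version is an assumption and may differ across Antrea
+releases):
+
+```bash
+curl 127.0.0.1:8001/apis/controlplane.antrea.io/v1beta2
+```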
+
+### Using antctl proxy
+
+Antctl supports running a reverse proxy (similar to the kubectl one) which
+enables access to the entire Antrea Controller API (not just aggregated API
+Services), but does not secure the TLS connection between the proxy and the
+Controller. Refer to the [antctl documentation](antctl.md#antctl-proxy) for more
+information.
+
+### Directly accessing the antrea-controller API
+
+If you want to directly access the antrea-controller API, you need to get its
+address and pass an authentication token when accessing it, like this:
+
+```bash
+# Get the antrea Service address
+ANTREA_SVC=$(kubectl get service antrea -n kube-system -o jsonpath='{.spec.clusterIP}')
+# Get the token value of antctl account, you can use any ServiceAccount that has permissions to antrea API.
+TOKEN=$(kubectl get secret/antctl-service-account-token -n kube-system -o jsonpath="{.data.token}"|base64 --decode)
+# Access antrea API with TOKEN
+curl --insecure --header "Authorization: Bearer $TOKEN" https://$ANTREA_SVC/apis
+```
+
+## Accessing the antrea-agent API
+
+antrea-agent runs as a DaemonSet Pod on each Node and exposes its API via a
+local endpoint. There are multiple ways you can access it:
+
+### Using antctl
+
+To use `antctl` to access the antrea-agent API, you need to exec into the
+antrea-agent container first. `antctl` is embedded in the image so it can be
+used directly.
+
+For example, you can view the internal NetworkPolicy objects for a specific
+agent with this command:
+
+```bash
+# Get into the antrea-agent container
+kubectl exec -it <antrea-agent-pod-name> -n kube-system -c antrea-agent -- bash
+# View the agent's NetworkPolicy
+antctl get networkpolicy
+```
+
+### Using antctl proxy
+
+Antctl supports running a reverse proxy (similar to the kubectl one) which
+enables access to the entire Antrea Agent API, but does not secure the TLS
+connection between the proxy and the Agent. Refer to the [antctl
+documentation](antctl.md#antctl-proxy) for more information.
+
+### Directly accessing the antrea-agent API
+
+If you want to directly access the antrea-agent API, you need to log into the
+Node that the antrea-agent runs on or exec into the antrea-agent container. Then
+access the local endpoint directly using the Bearer Token stored in the file
+system:
+
+```bash
+TOKEN=$(cat /var/run/antrea/apiserver/loopback-client-token)
+curl --insecure --header "Authorization: Bearer $TOKEN" https://127.0.0.1:10350/
+```
+
+Note that you can also access the antrea-agent API from outside the Node by
+using the authentication token of the `antctl` ServiceAccount:
+
+```bash
+# Get the token value of antctl account.
+TOKEN=$(kubectl get secret/antctl-service-account-token -n kube-system -o jsonpath="{.data.token}"|base64 --decode)
+# Access antrea API with TOKEN
+curl --insecure --header "Authorization: Bearer $TOKEN" https://<node-ip>:10350/podinterfaces
+```
+
+However, in this case you will be limited to the endpoints that `antctl` is
+allowed to access, as defined
+[here](https://github.com/antrea-io/antrea/blob/v2.2.0-alpha.2/build/charts/antrea/templates/antctl/clusterrole.yaml).
+
+## Accessing the flow-aggregator API
+
+flow-aggregator runs as a Deployment and exposes its API via a local endpoint.
+There are two ways you can access it:
+
+### Using antctl
+
+To use `antctl` to access the flow-aggregator API, you need to exec into the
+flow-aggregator container first. `antctl` is embedded in the image so it can be
+used directly.
+
+For example, you can dump the flow records with this command:
+
+```bash
+# Get into the flow-aggregator container
+kubectl exec -it <flow-aggregator-pod-name> -n flow-aggregator -- bash
+# View the flow records
+antctl get flowrecords
+```
+
+### Directly accessing the flow-aggregator API
+
+If you want to directly access the flow-aggregator API, you need to exec into
+the flow-aggregator container. Then access the local endpoint directly using the
+Bearer Token stored in the file system:
+
+```bash
+TOKEN=$(cat /var/run/antrea/apiserver/loopback-client-token)
+curl --insecure --header "Authorization: Bearer $TOKEN" https://127.0.0.1:10348/
+```
+
+## Troubleshooting Open vSwitch
+
+OVS daemons (`ovsdb-server` and `ovs-vswitchd`) run inside the `antrea-ovs`
+container of the `antrea-agent` Pod. You can use `kubectl exec` to execute OVS
+command line tools (e.g. `ovs-vsctl`, `ovs-ofctl`, `ovs-appctl`) in the
+container, for example:
+
+```bash
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-vsctl show
+```
+
+By default the host directory `/var/run/antrea/openvswitch/` is mounted to
+`/var/run/openvswitch/` of the `antrea-ovs` container and is used as the parent
+directory of the OVS UNIX domain sockets and configuration database file.
+Therefore, you may execute some OVS command line tools (including `ovs-vsctl` and
+`ovs-ofctl`) from a Kubernetes Node - assuming they are installed on the Node -
+by specifying the socket file path explicitly, for example:
+
+```bash
+ovs-vsctl --db unix:/var/run/antrea/openvswitch/db.sock show
+ovs-ofctl show unix:/var/run/antrea/openvswitch/br-int.mgmt
+```
+
+Commands to check basic OVS and OpenFlow information include:
+
+- `ovs-vsctl show`: dump OVS bridge and port configuration. Outputs of the
+command are like:
+
+```bash
+f06768ee-17ec-4abb-a971-b3b76abc8cda
+ Bridge br-int
+ datapath_type: system
+ Port coredns--e526c8
+ Interface coredns--e526c8
+ Port antrea-tun0
+ Interface antrea-tun0
+ type: geneve
+ options: {key=flow, remote_ip=flow}
+ Port antrea-gw0
+ Interface antrea-gw0
+ type: internal
+ ovs_version: "2.17.7"
+```
+
+- `ovs-ofctl show br-int`: show OpenFlow information of the OVS bridge.
+- `ovs-ofctl dump-flows br-int`: dump OpenFlow entries of the OVS bridge.
+- `ovs-ofctl dump-ports br-int`: dump traffic statistics of the OVS ports.
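+
+These commands accept the usual OVS match and filtering options, which helps
+when the full flow dump is too large to read. A sketch (table numbers depend
+on the Antrea OVS pipeline version):
+
+```bash
+# Dump only the flows of a single OpenFlow table
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-ofctl dump-flows br-int table=0
+# Dump only the flows matching a given protocol and destination
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-ofctl dump-flows br-int "ip,nw_dst=10.96.0.1"
+```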
+
+For more information on the usage of the OVS CLI tools, check the
+[Open vSwitch Manpages](https://www.openvswitch.org/support/dist-docs).
+
+## Troubleshooting with antctl
+
+`antctl` provides some useful commands to troubleshoot Antrea Controller and
+Agent, which can print the runtime information of `antrea-controller` and
+`antrea-agent`, dump NetworkPolicy objects, dump Pod network interface
+information on a Node, dump Antrea OVS flows, and perform OVS packet tracing.
+Refer to the [`antctl` guide](antctl.md#usage) to learn how to use these
+commands.
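+
+For instance, a few commands you might start with (a sketch; the Agent-side
+commands must be run from inside an antrea-agent container, and exact flags
+may vary with your antctl version, so check `antctl help`):
+
+```bash
+# Print Controller runtime information
+antctl get controllerinfo
+# Inside an antrea-agent container: Agent runtime information,
+# local Pod interfaces, and Antrea OVS flows of a given Pod
+antctl get agentinfo
+antctl get podinterface
+antctl get ovsflows -p <pod-name> -n <namespace>
+```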
+
+## Profiling Antrea components
+
+The easiest way to profile the Antrea components is to use the Go
+[pprof](https://golang.org/pkg/net/http/pprof/) tool. Both the Antrea Agent and
+the Antrea Controller use the K8s apiserver library to serve their API, and this
+library enables the pprof HTTP server by default. In order to access it without
+having to worry about authentication, you can use the antctl proxy function.
+
+For example, this is what you would do to look at a 30-second CPU profile for
+the Antrea Controller:
+
+```bash
+# Start the proxy in the background
+antctl proxy --controller &
+# Look at a 30-second CPU profile
+go tool pprof http://127.0.0.1:8001/debug/pprof/profile?seconds=30
+```
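+
+The same proxy exposes the other standard pprof endpoints as well, e.g. the
+heap profile:
+
+```bash
+# Look at the current heap profile
+go tool pprof http://127.0.0.1:8001/debug/pprof/heap
+```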
+
+## Ask your questions to the Antrea community
+
+If you are running into issues when running Antrea and you need help, ask your
+questions on [Github](https://github.com/antrea-io/antrea/issues/new/choose)
+or [reach out to us on Slack or during the Antrea office
+hours](../README.md#community).
diff --git a/content/docs/v2.2.0-alpha.2/docs/versioning.md b/content/docs/v2.2.0-alpha.2/docs/versioning.md
new file mode 100644
index 00000000..4578a72f
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/versioning.md
@@ -0,0 +1,459 @@
+# Antrea Versioning
+
+## Table of Contents
+
+- [Versioning scheme](#versioning-scheme)
+ - [Minor releases and patch releases](#minor-releases-and-patch-releases)
+ - [Feature stability](#feature-stability)
+- [Release cycle](#release-cycle)
+- [Antrea upgrade and supported version skew](#antrea-upgrade-and-supported-version-skew)
+- [Supported K8s versions](#supported-k8s-versions)
+- [Deprecation policies](#deprecation-policies)
+ - [Prometheus metrics deprecation policy](#prometheus-metrics-deprecation-policy)
+ - [APIs deprecation policy](#apis-deprecation-policy)
+- [Introducing new API resources](#introducing-new-api-resources)
+ - [Introducing new CRDs](#introducing-new-crds)
+- [Upgrading from Antrea v1 to Antrea v2](#upgrading-from-antrea-v1-to-antrea-v2)
+ - [Required upgrade steps because of API removal](#required-upgrade-steps-because-of-api-removal)
+ - [Case 1: upgrading from Antrea v1.13-v1.15 with kubectl](#case-1-upgrading-from-antrea-v113-v115-with-kubectl)
+ - [Case 2: upgrading from Antrea v1.13-v1.15 with Helm](#case-2-upgrading-from-antrea-v113-v115-with-helm)
+ - [Case 3: upgrading from Antrea v1.12 (or older) with kubectl](#case-3-upgrading-from-antrea-v112-or-older-with-kubectl)
+ - [Case 4: upgrading from Antrea v1.12 (or older) with Helm](#case-4-upgrading-from-antrea-v112-or-older-with-helm)
+ - [Other upgrade considerations](#other-upgrade-considerations)
+
+## Versioning scheme
+
+Antrea versions are expressed as `x.y.z`, where `x` is the major version, `y` is
+the minor version, and `z` is the patch version, following [Semantic Versioning]
+terminology.
+
+### Minor releases and patch releases
+
+Unlike minor releases, patch releases should not contain miscellaneous feature
+additions or improvements. No incompatibilities should ever be introduced
+between patch versions of the same minor version. API groups / versions must not
+be introduced or removed as part of patch releases.
+
+Patch releases are intended for important bug fixes to recent minor versions,
+such as addressing security vulnerabilities, fixes to problems preventing Antrea
+from being deployed & used successfully by a significant number of users, severe
+problems with no workaround, and blockers for products (including commercial
+products) which rely on Antrea.
+
+When it comes to dependencies, the following rules are observed between patch
+versions of the same Antrea minor versions:
+
+* the same minor OVS version should be used
+* the same minor version should be used for all Go dependencies, unless
+ updating to a new minor / major version is required for an important bug fix
+* for Antrea Docker images shipped as part of a patch release, the same version
+ must be used for the base Operating System (Linux distribution / Windows
+ server), unless an update is required to fix a critical bug. If important
+ updates are available for a given Operating System version (e.g. which address
+ security vulnerabilities), they should be included in Antrea patch releases.
+
+### Feature stability
+
+For every Antrea minor release, the stability level of supported features may be
+updated (from `Alpha` to `Beta` or from `Beta` to `GA`). Refer to the
+[CHANGELOG] for information about feature stability level for each release. For
+features controlled by a feature gate, this information is also present in a
+more structured way in [feature-gates.md](feature-gates.md).
+
+## Release cycle
+
+New Antrea minor releases are currently shipped every 6 to 8 weeks. This fast
+release cadence enables us to ship new features quickly and frequently. It may
+change in the future. Compared to deploying the top-of-tree of the Antrea main
+branch, using a released version should provide more stability
+guarantees:
+
+* despite our CI pipelines, some bugs can sneak into the branch and be fixed
+ shortly after
+* merge conflicts can break the top-of-tree temporarily
+* some CI jobs are run periodically and not for every pull request before merge;
+ as much as possible we run the entire test suite for each release candidate
+
+Antrea maintains release branches for the two most recent minor releases
+(e.g. the `release-0.10` and `release-0.11` branches are maintained until Antrea
+0.12 is released). As part of this maintenance process, patch versions are
+released as frequently as needed, following these
+[guidelines](#minor-releases-and-patch-releases). With the current release
+cadence, this means that each minor release receives approximately 3 months of
+patch support. This may seem short, but was done on purpose to encourage users
+to upgrade Antrea often and avoid potential incompatibility issues. In the
+future, we may reduce our release cadence for minor releases and simultaneously
+increase the support window for each release.
+
+## Antrea upgrade and supported version skew
+
+Our goal is to support "graceful" upgrades for Antrea. By "graceful", we notably
+mean that there should be no significant disruption to data-plane connectivity
+nor to policy enforcement, beyond the necessary disruption incurred by the
+restart of individual components:
+
+* during the Antrea Controller restart, new policies will not be
+ processed. Because the Controller also runs the validation webhook for
+ [Antrea-native policies](antrea-network-policy.md), an attempt to create an
+ Antrea-native policy resource before the restart is complete may return an
+ error.
+* during an Antrea Agent restart, the Node's data-plane will be impacted: new
+ connections to & from the Node will not be possible, and existing connections
+ may break.
+
+In particular, it should be possible to upgrade Antrea without compromising
+enforcement of existing network policies for both new and existing Pods.
+
+In order to achieve this, the different Antrea components need to support
+version skew.
+
+* **Antrea Controller**: must be upgraded first
+* **Antrea Agent**: must not be newer than the **Antrea Controller**, and may be
+ up to 4 minor versions older
+* **Antctl**: must not be newer than the **Antrea Controller**, and may be up to
+ 4 minor versions older
+
+The supported version skew means that we only recommend Antrea upgrades to a new
+release up to 4 minor versions newer. For example, a cluster using 0.10 can be
+upgraded to one of 0.11, 0.12, 0.13 or 0.14, but we discourage direct upgrades
+to 0.15 and beyond. With the current release cadence, this provides a 6-month
+window of compatibility. If we reduce our release cadence in the future, we may
+revisit this policy as well.
+
+When directly applying a newer Antrea YAML manifest, as provided for each
+[release](https://github.com/antrea-io/antrea/releases), there is no
+guarantee that the Antrea Controller will be upgraded first. In practice, the
+Controller would be upgraded simultaneously with the first Agent(s) to be
+upgraded by the rolling update of the Agent DaemonSet. This may create some
+transient issues and compromise the "graceful" upgrade. For upgrade scenarios,
+we therefore recommend that you "split-up" the manifest to ensure that the
+Controller is upgraded first.
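+
+One possible way to perform this "split-up" is sketched below. It assumes that
+`yq` (v4) is installed, and that the Agent is the only DaemonSet in the
+manifest (which is the case for the standard Antrea manifest at the time of
+writing):
+
+```bash
+# Apply everything except the Agent DaemonSet first, so that the Controller
+# Deployment (along with CRDs, RBAC and ConfigMaps) is upgraded before any Agent.
+yq 'select(.kind != "DaemonSet")' antrea.yml | kubectl apply -f -
+kubectl rollout status -n kube-system deployment/antrea-controller
+# Then apply the full manifest to roll out the new Agent DaemonSet.
+kubectl apply -f antrea.yml
+```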
+
+## Supported K8s versions
+
+Each Antrea minor release should support [maintained K8s
+releases](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions)
+at the time of release (3 up to K8s 1.19, 4 after that). For example, at the
+time that Antrea 0.10 was released, the latest K8s version was 1.19; as a result
+we guarantee that 0.10 supports at least 1.19, 1.18 and 1.17 (in practice it
+also supports K8s 1.16).
+
+In addition, we strive to support the K8s versions used by default in
+cloud-managed K8s services ([EKS], [AKS] and [GKE] regular channel).
+
+## Deprecation policies
+
+### Prometheus metrics deprecation policy
+
+Antrea follows a similar policy as
+[Kubernetes](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/#metric-lifecycle)
+for metrics deprecation.
+
+Alpha metrics have no stability guarantees; as such they can be modified or
+deleted at any time.
+
+Stable metrics are guaranteed to not change; specifically, stability means:
+
+* the metric itself will not be renamed
+* the type of metric will not be modified
+
+Eventually, even a stable metric can be deleted. In this case, the metric must
+be marked as deprecated first and the metric must stay deprecated for at least
+one minor release. The [CHANGELOG] must announce both metric deprecations and
+metric deletions.
+
+Before deprecation:
+
+```bash
+# HELP some_counter this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+After deprecation:
+
+```bash
+# HELP some_counter (Deprecated since 0.10.0) this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+In the future, we may introduce the same concept of [hidden
+metric](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/#show-hidden-metrics)
+as K8s, as an additional part of the metric lifecycle.
+
+### APIs deprecation policy
+
+The Antrea APIs are built using K8s (they are a combination of
+CustomResourceDefinitions and aggregation layer APIServices) and we follow the
+same versioning scheme as the K8s APIs and the same [deprecation
+policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/).
+
+Other than the most recent API versions in each track, older API versions must
+be supported after their announced deprecation for a duration of no less than:
+
+* GA: 12 months
+* Beta: 9 months
+* Alpha: N/A (can be removed immediately)
+
+This also applies to the `controlplane` API. In particular, introduction and
+removal of new versions for this API must respect the ["graceful" upgrade
+guarantee](#antrea-upgrade-and-supported-version-skew). The `controlplane` API
+(which is exposed using the aggregation layer) is often referred to as an
+"internal" API as it is used by the Antrea components to communicate with each
+other, and is usually not consumed by end users, e.g. cluster admins. However,
+this API may also be used for integration with other software, which is why we
+abide by the same deprecation policy as for other more "user-facing" APIs
+(e.g. Antrea-native policy CRDs).
+
+K8s has a [moratorium](https://github.com/kubernetes/kubernetes/issues/52185) on
+the removal of API object versions that have been persisted to storage. At the
+moment, none of Antrea APIServices (which use the aggregation layer) persist
+objects to storage. So the only objects we need to worry about are
+CustomResources, which are persisted by the K8s apiserver. For them, we adopt
+the following rules:
+
+* Alpha API versions may be removed at any time.
+* The [`deprecated` field] must be used for CRDs to indicate that a particular
+ version of the resource has been deprecated.
+* Beta and GA API versions must be supported after deprecation for the
+ respective durations stipulated above before they can be removed.
+* For deprecated Beta and GA API versions, a [conversion webhook] must be
+ provided along with each Antrea release, until the API version is removed
+ altogether.
+
+## Introducing new API resources
+
+### Introducing new CRDs
+
+Starting with Antrea v1.0, all Custom Resource Definitions (CRDs) for Antrea are
+defined in the same API group, `crd.antrea.io`, and all CRDs in this group are
+versioned individually. For example, at the time of writing this (v1.3 release
+timeframe), the Antrea CRDs include:
+
+* `ClusterGroup` in `crd.antrea.io/v1alpha2`
+* `ClusterGroup` in `crd.antrea.io/v1alpha3`
+* `Egress` in `crd.antrea.io/v1alpha2`
+* etc.
+
+Notice how 2 versions of `ClusterGroup` are supported: the one in
+`crd.antrea.io/v1alpha2` was introduced in v1.0, and is being deprecated as it
+was replaced by the one in `crd.antrea.io/v1alpha3`, introduced in v1.1.
+
+When introducing a new version of a CRD, [the API deprecation policy should be
+followed](#apis-deprecation-policy).
+
+When introducing a CRD, the following rule should be followed in order to avoid
+potential dependency cycles (and thus import cycles in Go): if the CRD depends on
+other object types spread across potentially different versions of
+`crd.antrea.io`, the CRD should be defined in a group version greater or equal
+to all of these versions. For example, if we want to introduce a new CRD which
+depends on types `v1alpha1.X` and `v1alpha2.Y`, it needs to go into `v1alpha2`
+or a more recent version of `crd.antrea.io`. As a rule it should probably go
+into `v1alpha2` unless it is closely related to other CRDs in a later version,
+in which case it can be defined alongside these CRDs, in order to avoid user
+confusion.
+
+If a new CRD does not have dependencies and is not closely related to an
+existing CRD, it will typically be defined in `v1alpha1`. In some rare cases, a
+CRD can be defined in `v1beta1` directly if there is enough confidence in the
+stability of the API.
+
+## Upgrading from Antrea v1 to Antrea v2
+
+### Required upgrade steps because of API removal
+
+Several CRD API Alpha versions were removed as part of the major version bump to
+Antrea v2, following the introduction of Beta versions in earlier minor
+releases. For more details, refer to this [list](api.md#previously-supported-crds).
+Because of these CRD version removals, you will need to make sure that you
+upgrade your existing CRs (for the affected CRDs) to the new (storage) version,
+*before* trying to upgrade to Antrea v2.0. You will also need to ensure that the
+`status.storedVersions` field for the affected CRDs is patched, with the old
+versions being removed. To simplify your upgrade process, we provide an antctl
+command which will automatically handle these steps for you: `antctl upgrade
+api-storage`.
+
+There are 3 possible scenarios:
+
+1) You never installed an Antrea minor version older than v1.13 in your
+ cluster. In this case you can directly upgrade to Antrea v2.0.
+2) Your cluster is currently running Antrea v1.13 through v1.15 (included), but
+ you previously ran an Antrea minor version older than v1.13. In this case,
+ you will need to run `antctl upgrade api-storage` prior to upgrading to
+ Antrea v2.0, regardless of whether you have created Antrea CRs or not.
+3) Your cluster is currently running an Antrea minor version older than
+ v1.13. In this case, you will first need to upgrade to Antrea v1.15.1, then
+ run `antctl upgrade api-storage`, before being able to upgrade to Antrea
+ v2.0.
+
+Even for scenario 1, feel free to run `antctl upgrade api-storage`. It is not
+strictly required, but it will not cause any harm either.
+
+In the sub-sections below, we give some detailed instructions for upgrade, based
+on your current Antrea version and installation method.
+
+For more information about CRD versioning, refer to the K8s
+[documentation](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/).
+The `antctl upgrade api-storage` command aims to automate that process for our
+users.
+
+#### Case 1: upgrading from Antrea v1.13-v1.15 with kubectl
+
+```text
+# Download antctl from release assets. You can use the antctl version that
+# matches your current Antrea version.
+$ antctl version
+antctlVersion: v1.13.4
+controllerVersion: v1.13.4
+
+# Even if you didn't create any CR using the CRD API versions which have been
+# removed in Antrea v2.0, you will still need to run the antctl command, or the
+# upgrade will fail.
+
+# Upgrade API storage for all CRDs.
+# For usage information run: antctl upgrade api-storage --help
+# In particular, the script will upgrade the system Tier CRs managed by Antrea
+# to the new storage version (v1beta1) if needed. If you never installed a minor
+# version of Antrea older than v1.13 in your cluster, you may not see any CRD
+# upgrade.
+$ antctl upgrade api-storage
+Skip upgrading CRD "externalnodes.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "trafficcontrols.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "externalentities.crd.antrea.io" since all stored objects are in the storage version.
+Skip upgrading CRD "supportbundlecollections.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreaagentinfos.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "ippools.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreacontrollerinfos.crd.antrea.io" since it only has one version.
+Upgrading 6 objects of CRD "tiers.crd.antrea.io".
+Successfully upgraded 6 objects of CRD "tiers.crd.antrea.io".
+
+# You can now upgrade to Antrea v2.0 successfully.
+$ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea.yml
+```
+
+#### Case 2: upgrading from Antrea v1.13-v1.15 with Helm
+
+```text
+# Download antctl from release assets. You can use the antctl version that
+# matches your current Antrea version.
+$ antctl version
+antctlVersion: v1.13.4
+controllerVersion: v1.13.4
+
+# Even if you didn't create any CR using the CRD API versions which have been
+# removed in Antrea v2.0, you will still need to run the antctl command, or the
+# upgrade will fail.
+
+# Upgrade API storage for all CRDs.
+# For usage information run: antctl upgrade api-storage --help
+# In particular, the script will upgrade the system Tier CRs managed by Antrea
+# to the new storage version (v1beta1) if needed. If you never installed a minor
+# version of Antrea older than v1.13 in your cluster, you may not see any CRD
+# upgrade.
+$ antctl upgrade api-storage
+Skip upgrading CRD "externalnodes.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "trafficcontrols.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "externalentities.crd.antrea.io" since all stored objects are in the storage version.
+Skip upgrading CRD "supportbundlecollections.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreaagentinfos.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "ippools.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreacontrollerinfos.crd.antrea.io" since it only has one version.
+Upgrading 6 objects of CRD "tiers.crd.antrea.io".
+Successfully upgraded 6 objects of CRD "tiers.crd.antrea.io".
+
+# You can now upgrade to Antrea v2.0 successfully, starting with CRDs.
+$ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea-crds.yml
+$ helm upgrade antrea antrea/antrea --namespace kube-system --version 2.0.0
+```
+
+#### Case 3: upgrading from Antrea v1.12 (or older) with kubectl
+
+```text
+# Start by upgrading to Antrea v1.15.1.
+$ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.15.1/antrea.yml
+
+# Download antctl from the v1.15.1 release assets.
+$ antctl version
+antctlVersion: v1.15.1
+controllerVersion: v1.15.1
+
+# Upgrade API storage for all CRDs.
+# For usage information run: antctl upgrade api-storage --help
+# In particular, the script will upgrade the system Tier CRs managed by Antrea
+# to the new storage version (v1beta1).
+$ antctl upgrade api-storage
+Skip upgrading CRD "externalnodes.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "trafficcontrols.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "externalentities.crd.antrea.io" since all stored objects are in the storage version.
+Skip upgrading CRD "supportbundlecollections.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreaagentinfos.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "ippools.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreacontrollerinfos.crd.antrea.io" since it only has one version.
+Upgrading 6 objects of CRD "tiers.crd.antrea.io".
+Successfully upgraded 6 objects of CRD "tiers.crd.antrea.io".
+
+# You can now upgrade to Antrea v2.0 successfully.
+$ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea.yml
+```
+
+#### Case 4: upgrading from Antrea v1.12 (or older) with Helm
+
+```text
+# Start by upgrading to Antrea v1.15.1, starting with CRDs.
+$ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.15.1/antrea-crds.yml
+$ helm upgrade antrea antrea/antrea --namespace kube-system --version 1.15.1
+
+# Download antctl from the v1.15.1 release assets.
+$ antctl version
+antctlVersion: v1.15.1
+controllerVersion: v1.15.1
+
+# Upgrade API storage for all CRDs.
+# For usage information run: antctl upgrade api-storage --help
+# In particular, the script will upgrade the system Tier CRs managed by Antrea
+# to the new storage version (v1beta1).
+$ antctl upgrade api-storage
+Skip upgrading CRD "externalnodes.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "trafficcontrols.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "externalentities.crd.antrea.io" since all stored objects are in the storage version.
+Skip upgrading CRD "supportbundlecollections.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreaagentinfos.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "ippools.crd.antrea.io" since it only has one version.
+Skip upgrading CRD "antreacontrollerinfos.crd.antrea.io" since it only has one version.
+Upgrading 6 objects of CRD "tiers.crd.antrea.io".
+Successfully upgraded 6 objects of CRD "tiers.crd.antrea.io".
+
+# You can now upgrade to Antrea v2.0 successfully, starting with CRDs.
+$ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea-crds.yml
+$ helm upgrade antrea antrea/antrea --namespace kube-system --version 2.0.0
+```
+
+### Other upgrade considerations
+
+Some deprecated options have been removed from the Antrea configuration:
+
+* `nplPortRange` has been removed from the Agent configuration, use
+ `nodePortLocal.portRange` instead.
+* `enableIPSecTunnel` has been removed from the Agent configuration, use
+ `trafficEncryptionMode` instead.
+* `multicastInterfaces` has been removed from the Agent configuration, use
+ `multicast.multicastInterfaces` instead.
+* `multicluster.enable` has been removed from the Agent configuration, as the
+ Multicluster functionality is no longer gated by a boolean parameter.
+* `legacyCRDMirroring` has been removed from the Controller configuration, as it
+ dates back to the v1 major version bump, and it has been ignored for years.
+
+If you are porting your old Antrea configuration to Antrea v2, please make sure
+that you are no longer using these parameters. Unknown parameters will be
+ignored by Antrea, and the behavior may not be what you expect.
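+
+As a quick sanity check before upgrading, you can search your existing
+configuration for the removed parameters. A sketch, with `antrea-agent.conf`
+and `antrea-controller.conf` standing in for wherever your configuration
+lives:
+
+```bash
+# Any match for these removed options should be migrated to its replacement.
+grep -nE 'nplPortRange|enableIPSecTunnel|legacyCRDMirroring' antrea-agent.conf antrea-controller.conf
+# multicastInterfaces needs a manual look: only the top-level occurrence was
+# removed; the nested multicast.multicastInterfaces option is still valid.
+grep -n 'multicastInterfaces' antrea-agent.conf
+```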
+
+[Semantic Versioning]: https://semver.org/
+[CHANGELOG]: ../CHANGELOG.md
+[EKS]: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html
+[AKS]: https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions
+[GKE]: https://cloud.google.com/kubernetes-engine/docs/release-notes
+[`deprecated` field]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation
+[conversion webhook]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion
diff --git a/content/docs/v2.2.0-alpha.2/docs/windows.md b/content/docs/v2.2.0-alpha.2/docs/windows.md
new file mode 100644
index 00000000..871de347
--- /dev/null
+++ b/content/docs/v2.2.0-alpha.2/docs/windows.md
@@ -0,0 +1,456 @@
+# Deploying Antrea on Windows
+
+## Table of Contents
+
+- [Overview](#overview)
+ - [Components that run on Windows](#components-that-run-on-windows)
+ - [Antrea Windows demo](#antrea-windows-demo)
+- [Deploying Antrea on Windows worker Nodes](#deploying-antrea-on-windows-worker-nodes)
+ - [Prerequisites](#prerequisites)
+ - [Installation as a Pod](#installation-as-a-pod)
+ - [Download & Configure Antrea for Linux](#download--configure-antrea-for-linux)
+ - [Add Windows antrea-agent DaemonSet](#add-windows-antrea-agent-daemonset)
+ - [Join Windows worker Nodes](#join-windows-worker-nodes)
+ - [1. (Optional) Install OVS (provided by Antrea or your own)](#1-optional-install-ovs-provided-by-antrea-or-your-own)
+ - [2. Disable Windows Firewall](#2-disable-windows-firewall)
+ - [3. Install kubelet, kubeadm and configure kubelet startup params](#3-install-kubelet-kubeadm-and-configure-kubelet-startup-params)
+ - [4. Prepare Node environment needed by antrea-agent](#4-prepare-node-environment-needed-by-antrea-agent)
+ - [5. Run kubeadm to join the Node](#5-run-kubeadm-to-join-the-node)
+ - [Verify your installation](#verify-your-installation)
+ - [Installation as a Service](#installation-as-a-service)
+ - [Manually run antrea-agent on Windows worker Nodes](#manually-run-antrea-agent-on-windows-worker-nodes)
+- [Known issues](#known-issues)
+
+## Overview
+
+Antrea supports Windows worker Nodes. On Windows Nodes, Antrea sets up an overlay
+network to forward packets between Nodes and implements NetworkPolicies. Currently
+Geneve, VXLAN, and STT tunnels are supported.
+
+This page shows how to install antrea-agent on Windows Nodes and register the
+Node to an existing Kubernetes cluster.
+
+For the detailed design of how antrea-agent works on Windows, please refer to
+the [design doc](design/windows-design.md).
+
+**Note: Docker support on Windows Nodes was dropped completely in Antrea v2.0,
+ making containerd the only supported container runtime. As part of this
+ change, we renamed the `antrea-windows-containerd.yml` manifest to
+ `antrea-windows.yml`, and the `antrea-windows-containerd-with-ovs.yml`
+ manifest to `antrea-windows-with-ovs.yml`. Prior to the Antrea v2.0 release,
+ the `antrea-windows.yml` manifest was used to support Windows Nodes with
+ Docker. For the best experience, make sure that you refer to the version of
+ the documentation that matches the Antrea version you are deploying.**
+
+### Components that run on Windows
+
+The following components should be configured and run on the Windows Node.
+
+* [kubernetes components](https://kubernetes.io/docs/setup/production-environment/windows/user-guide-windows-nodes/)
+* OVS daemons
+* antrea-agent
+
+antrea-agent and the OVS daemons can either run as Pods (containerized) or as
+Windows services, and the following configurations are supported:
+
+| OVS daemons | antrea-agent | Supported | Refer to |
+| ---------------- | ---------------- | ----------------- | -------- |
+| Containerized | Containerized | Yes (recommended) | [Installation as a Pod](#installation-as-a-pod) |
+| Containerized | Windows Service | No | N/A |
+| Windows Services | Containerized | Yes | [Installation as a Pod](#installation-as-a-pod) |
+| Windows Services | Windows Services | Yes | [Installation as a Service](#installation-as-a-service) |
+
+### Antrea Windows demo
+
+Watch this [demo video](https://www.youtube.com/watch?v=NjeVPGgaNFU) of running
+Antrea in a Kubernetes cluster with both Linux and Windows Nodes. The demo also
+shows the Antrea OVS bridge configuration on a Windows Node, and NetworkPolicy
+enforcement for Windows Pods. Note, OVS driver and daemons are pre-installed on
+the Windows Nodes in the demo.
+
+## Deploying Antrea on Windows worker Nodes
+
+Running Antrea on Windows Nodes requires the containerd container runtime. The
+recommended installation method is [Installation as a
+Pod](#installation-as-a-pod), and it requires containerd 1.6 or higher. If you
+prefer running the Antrea Agent as a Windows service, or if you are using
+containerd 1.5, you can use the [Installation as a
+Service](#installation-as-a-service) method.
+
+Starting from v2.1, the Antrea Windows image is built on a Linux host with
+docker buildx and uses [hpc](https://github.com/microsoft/windows-host-process-containers-base-image)
+as the base image.
+
+### Prerequisites
+
+* Create a Kubernetes cluster.
+* Obtain a Windows Server 2019 license (or higher) in order to configure the
+ Windows Nodes that will host Windows containers. And install the latest
+ Windows updates.
+* On each Windows Node, install the following:
+ - [Hyper-V](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server)
+ with management tools. If your Nodes do not have the virtualization
+ capabilities required by Hyper-V, use the workaround described in the
+ [Known issues](#known-issues) section.
+ - [containerd](https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=containerd#windows-server-1).
+
+### Installation as a Pod
+
+This installation method requires Antrea v1.10 or higher, and containerd 1.6 or
+higher (containerd 1.7 or higher is recommended). It relies on support for
+[Windows HostProcess Pods](https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/),
+which is generally available starting with K8s 1.26.
+
+More detailed containerd version requirements are outlined below:
+
+| Kubernetes Version | Recommended containerd Version |
+| ------------------- | ---------------- |
+| 1.26 | 1.7.0+, 1.6.18+ |
+| 1.27 | 1.7.0+, 1.6.18+ |
+| 1.28 | 1.7.0+, 1.6.18+ |
+| 1.29 | 1.7.11+, 1.6.27+ |
+| 1.30 | 1.7.13+, 1.6.28+ |
+
+Note: Starting from Antrea v2.1, the Antrea Windows image is built on the HPC
+(Host Process Containers) base image; containerd version 1.6.18 or higher is
+required because earlier versions do not support importing HPC images on
+Windows.
+
+For more detailed information on Kubernetes-supported containerd versions, refer to the [Containerd releases page](https://containerd.io/releases/#kubernetes-support).
+
+Starting with Antrea v1.13, Antrea takes over all the responsibilities of
+kube-proxy for Windows Nodes by default, and kube-proxy should not be deployed
+on Windows Nodes with Antrea.
+
+#### Download & Configure Antrea for Linux
+
+Deploy Antrea for Linux on the control-plane Node following the [Getting started](getting-started.md)
+document. The following command deploys Antrea with the version specified by `<TAG>`:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+#### Add Windows antrea-agent DaemonSet
+
+You need to manually set the `kubeAPIServerOverride` field in the YAML
+configuration file as the Antrea Proxy `proxyAll` mode is enabled by default.
+
+```yaml
+ # Provide the address of Kubernetes apiserver, to override any value provided in kubeconfig or InClusterConfig.
+ # Defaults to "". It must be a host string, a host:port pair, or a URL to the base of the apiserver.
+ kubeAPIServerOverride: "10.10.1.1:6443"
+
+ # Option antreaProxy contains Antrea Proxy related configuration options.
+ antreaProxy:
+ # Option proxyAll tells antrea-agent to proxy ClusterIP Service traffic, regardless of where they come from.
+ # Therefore, running kube-proxy is no longer required. This requires the Antrea Proxy feature to be enabled.
+ # Note that this option is experimental. If kube-proxy is removed, option kubeAPIServerOverride must be used to access
+ # apiserver directly.
+ proxyAll: true
+```
+
+You can run both the Antrea Agent and the OVS daemons on Windows Nodes using a
+single DaemonSet, by applying the file `antrea-windows-with-ovs.yml`. This is
+the recommended installation method. The following commands download the
+manifest, set `kubeAPIServerOverride`, and create the DaemonSet:
+
+```bash
+KUBE_APISERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') && \
+curl -sL https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-windows-with-ovs.yml | \
+sed "s|.*kubeAPIServerOverride: \"\"| kubeAPIServerOverride: \"${KUBE_APISERVER}\"|g" | \
+kubectl apply -f -
+```
+
+Alternatively, to deploy the antrea-agent Windows DaemonSet without the OVS
+daemons, apply the file `antrea-windows.yml` with the following commands:
+
+```bash
+KUBE_APISERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') && \
+curl -sL https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-windows.yml | \
+sed "s|.*kubeAPIServerOverride: \"\"| kubeAPIServerOverride: \"${KUBE_APISERVER}\"|g" | \
+kubectl apply -f -
+```
+
+When using `antrea-windows.yml`, you will need to install the OVS
+userspace daemons as services when you prepare your Windows worker Nodes, in
+the next section.
+
+#### Join Windows worker Nodes
+
+##### 1. (Optional) Install OVS (provided by Antrea or your own)
+
+Depending on which method you are using to install Antrea on Windows, and
+depending on whether you are using your own [signed](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/driver-signing)
+OVS kernel driver or you want to use the test-signed driver provided by Antrea,
+you will need to invoke the `Install-OVS.ps1` script differently (or not at all).
+
+| Containerized OVS daemons? | Test-signed OVS driver? | Run this command |
+| -------------------------- | ----------------------- |---------------------------------------------------------------------------|
+| Yes | Yes | Not required |
+| Yes | No | Not required |
+| No | Yes | `.\Install-OVS.ps1 -InstallUserspace $true` |
+| No                         | No                      | `.\Install-OVS.ps1 -InstallUserspace $true -LocalFile <path-to-OVS-package>` |
+
+If you used `antrea-windows-with-ovs.yml` to create the antrea-agent
+Windows DaemonSet, then you are using "Containerized OVS daemons". For all other
+methods, you are *not* using "Containerized OVS daemons".
+
+Antrea provides a pre-built OVS package which contains a test-signed OVS kernel
+driver. If you don't have a self-signed OVS package and just want to try Antrea
+on Windows, this package can be used for testing.
+
+**[Test-only]** If you are using a test-signed driver (such as the one provided with Antrea),
+please make sure to [enable test-signing](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/the-testsigning-boot-configuration-option):
+
+```powershell
+Bcdedit.exe -set TESTSIGNING ON
+Restart-Computer
+```
+
+If you want to run OVS as Windows native services, and you are bringing
+your own OVS package with a signed OVS kernel driver, you would run:
+
+```powershell
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/main/hack/windows/Install-OVS.ps1
+.\Install-OVS.ps1 -InstallUserspace $true -LocalFile <path-to-OVS-package>
+
+# verify that the OVS services are installed
+get-service ovsdb-server
+get-service ovs-vswitchd
+```
+
+##### 2. Disable Windows Firewall
+
+```powershell
+Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False
+```
+
+##### 3. Install kubelet, kubeadm and configure kubelet startup params
+
+First, install kubelet and kubeadm using the provided `Prepare-Node.ps1`
+script. Specify the Node IP, Kubernetes version and container runtime when
+running the script. The following commands download and execute
+`Prepare-Node.ps1`:
+
+```powershell
+# Example:
+curl.exe -LO "https://raw.githubusercontent.com/antrea-io/antrea/main/hack/windows/Prepare-Node.ps1"
+.\Prepare-Node.ps1 -KubernetesVersion v1.30.0 -NodeIP 192.168.1.10
+```
+
+##### 4. Prepare Node environment needed by antrea-agent
+
+Run the following commands to prepare the Node environment needed by antrea-agent:
+
+```powershell
+mkdir c:\k\antrea
+cd c:\k\antrea
+$TAG="v2.0.0"
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/${TAG}/hack/windows/Clean-AntreaNetwork.ps1
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/${TAG}/hack/windows/Prepare-AntreaAgent.ps1
+# use -RunOVSServices $false for containerized OVS!
+.\Prepare-AntreaAgent.ps1 [-RunOVSServices $false]
+```
+
+The script `Prepare-AntreaAgent.ps1` performs the following tasks:
+
+* Remove stale network resources created by antrea-agent.
+
+ After the Windows Node reboots, there will be stale network resources which
+ need to be cleaned before starting antrea-agent.
+
+* Ensure OVS services are running.
+
+ This script starts OVS services on the Node if they are not running. This
+ step needs to be skipped in case of OVS containerization. In that case, you
+ need to specify the parameter `RunOVSServices` as false.
+
+ ```powershell
+ .\Prepare-AntreaAgent.ps1 -RunOVSServices $false
+ ```
+
+The script must be executed every time you restart the Node to prepare the
+environment for antrea-agent.
+
+You can ensure that the script is executed automatically after each Windows
+startup by using different methods. Here are two examples for your reference:
+
+* Example 1: Update kubelet service.
+
+Insert the following line in the kubelet service script `c:\k\StartKubelet.ps1`
+to invoke `Prepare-AntreaAgent.ps1` when starting the kubelet service:
+
+```powershell
+& C:\k\antrea\Prepare-AntreaAgent.ps1 -RunOVSServices $false
+```
+
+* Example 2: Create a ScheduledJob that runs at startup.
+
+```powershell
+$trigger = New-JobTrigger -AtStartup -RandomDelay 00:00:30
+$options = New-ScheduledJobOption -RunElevated
+Register-ScheduledJob -Name PrepareAntreaAgent -Trigger $trigger -ScriptBlock { Invoke-Expression C:\k\antrea\Prepare-AntreaAgent.ps1 -RunOVSServices $false } -ScheduledJobOption $options
+```
+
+##### 5. Run kubeadm to join the Node
+
+On Windows Nodes, run the `kubeadm join` command to join the cluster. The token
+is provided by the control-plane Node. If you lost the token, or the token has
+expired, you can run `kubeadm token create --print-join-command` (on the
+control-plane Node) to generate a new token and join command. An example
+`kubeadm join` command is like below:
+
+```powershell
+kubeadm join 192.168.101.5:6443 --token tdp0jt.rshv3uobkuoobb4v --discovery-token-ca-cert-hash sha256:84a163e57bf470f18565e44eaa2a657bed4da9748b441e9643ac856a274a30b9
+```
+
+##### Verify your installation
+
+There will be a temporary network interruption on a Windows worker Node the
+first time antrea-agent starts, because antrea-agent configures OVS to take
+over the host network. After that, you should be able to view the Windows
+Nodes and Pods in your cluster by running:
+
+```bash
+# Show Nodes
+kubectl get nodes -o wide
+NAME              STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
+control-plane     Ready    control-plane   1h    v1.29.0   10.176.27.168   <none>        Ubuntu 22.04.3 LTS               6.2.0-1017-generic   containerd://1.6.26
+win-5akrf2tpq91   Ready    <none>          1h    v1.29.0   10.176.27.150   <none>        Windows Server 2019 Datacenter   10.0.17763.5206      containerd://1.6.6
+win-5akrf2tpq92   Ready    <none>          1h    v1.29.0   10.176.27.197   <none>        Windows Server 2019 Datacenter   10.0.17763.5206      containerd://1.6.6
+
+# Show antrea-agent Pods
+kubectl get pods -o wide -n kube-system | grep windows
+antrea-agent-windows-6hvkw 1/1 Running 0 100s
+```
+
+### Installation as a Service
+
+Install Antrea as usual. The following command deploys Antrea with the version
+specified by `<TAG>`:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+When running the Antrea Agent as a Windows service, no DaemonSet is created for
+Windows worker Nodes. You will need to ensure that [nssm](https://nssm.cc/) is
+installed on all your Windows Nodes. `nssm` is a handy tool to manage services
+on Windows.
+
+To prepare your Windows worker Nodes, follow the steps in [Join Windows worker Nodes](#join-windows-worker-nodes).
+With this installation method, OVS daemons are always run as services (not
+containerized), and you will need to run `Install-OVS.ps1` to install them.
+
+When your Nodes are ready, run the following scripts to install the antrea-agent
+service. NOTE: `<KubernetesVersion>`, `<KubeConfig>` and
+`<KubeletKubeconfigPath>` should be set by you. For example:
+
+```powershell
+$KubernetesVersion="v1.29.0"
+$KubeConfig="C:/Users/Administrator/.kube/config" # admin kubeconfig
+$KubeletKubeconfigPath="C:/etc/kubernetes/kubelet.conf"
+```
+
+```powershell
+$TAG="v2.0.0"
+$KubernetesVersion="<KubernetesVersion>"
+$KubeConfig="<KubeConfig>"
+$KubeletKubeconfigPath="<KubeletKubeconfigPath>"
+$KubernetesHome="c:/k"
+$AntreaHome="c:/k/antrea"
+
+curl.exe -LO "https://raw.githubusercontent.com/antrea-io/antrea/${TAG}/hack/windows/Helper.psm1"
+Import-Module ./Helper.psm1
+
+Install-AntreaAgent -KubernetesVersion "$KubernetesVersion" -KubernetesHome "$KubernetesHome" -KubeConfig "$KubeConfig" -AntreaVersion "$TAG" -AntreaHome "$AntreaHome"
+
+New-DirectoryIfNotExist "${AntreaHome}/logs"
+nssm install antrea-agent "${AntreaHome}/bin/antrea-agent.exe" "--config=${AntreaHome}/etc/antrea-agent.conf --logtostderr=false --log_dir=${AntreaHome}/logs --alsologtostderr --log_file_max_size=100 --log_file_max_num=4"
+
+nssm set antrea-agent DependOnService ovs-vswitchd
+nssm set antrea-agent Start SERVICE_DELAYED_AUTO_START
+
+Start-Service antrea-agent
+```
+
+### Manually run antrea-agent on Windows worker Nodes
+
+Antrea also provides powershell scripts which help install and run the Antrea
+Agent manually. Please complete the steps in the
+[Installation as a Pod](#installation-as-a-pod) section, skipping the
+[Add Windows antrea-agent DaemonSet](#add-windows-antrea-agent-daemonset) step.
+Then run the following commands in powershell:
+
+```powershell
+mkdir c:\k\antrea
+cd c:\k\antrea
+curl.exe -LO https://github.com/antrea-io/antrea/releases/download/<TAG>/Start-AntreaAgent.ps1
+# Run antrea-agent
+# $KubeConfigPath is the path of kubeconfig file
+./Start-AntreaAgent.ps1 -kubeconfig $KubeConfigPath
+```
+
+> Note: Some features such as supportbundle collection are not supported in this
+> way. It's recommended to run antrea-agent as a Pod.
+
+## Known issues
+
+1. HNS Network is not persistent on Windows. So after the Windows Node reboots,
+the HNS Network created by antrea-agent is removed, and the Open vSwitch
+Extension is disabled by default. In this case, the stale OVS bridge and ports
+should be removed. A helper script [Clean-AntreaNetwork.ps1](https://raw.githubusercontent.com/antrea-io/antrea/main/hack/windows/Clean-AntreaNetwork.ps1)
+can be used to clean the OVS bridge.
+
+ ```powershell
+ # If OVS userspace processes were running as a Service on Windows host
+ ./Clean-AntreaNetwork.ps1 -OVSRunMode "service"
+ # If OVS userspace processes were running inside container in antrea-agent Pod
+ ./Clean-AntreaNetwork.ps1 -OVSRunMode "container"
+ ```
+
+2. Hyper-V feature cannot be installed on a Windows Node because the processor
+does not have the required virtualization capabilities.
+
+    If the processor of the Windows Node does not have the required
+    virtualization capabilities, the installation of the Hyper-V feature will
+    fail with the following error:
+
+ ```powershell
+ PS C:\Users\Administrator> Install-WindowsFeature Hyper-V
+
+ Success Restart Needed Exit Code Feature Result
+ ------- -------------- --------- --------------
+ False Maybe Failed {}
+ Install-WindowsFeature : A prerequisite check for the Hyper-V feature failed.
+ 1. Hyper-V cannot be installed: The processor does not have required virtualization capabilities.
+ At line:1 char:1
+ + Install-WindowsFeature hyper-v
+ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ + CategoryInfo : InvalidOperation: (Hyper-V:ServerComponentWrapper) [Install-WindowsFeature], Exception
+ + FullyQualifiedErrorId : Alteration_PrerequisiteCheck_Failed,Microsoft.Windows.ServerManager.Commands.AddWindowsF
+ eatureCommand
+ ```
+
+    The capabilities are required by the Hyper-V `hypervisor` components to
+    support [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container#hyper-v-isolation).
+    If you only need [Process Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container#process-isolation)
+    on the Nodes, you can apply the following workaround to skip the CPU check
+    for the Hyper-V feature installation.
+
+ ```powershell
+ # 1. Install containers feature
+ Install-WindowsFeature containers
+
+ # 2. Install Hyper-V management powershell module
+ Install-WindowsFeature Hyper-V-Powershell
+
+ # 3. Install Hyper-V feature without CPU check and disable the "hypervisor"
+ dism /online /enable-feature /featurename:Microsoft-Hyper-V /all /NoRestart
+ dism /online /disable-feature /featurename:Microsoft-Hyper-V-Online /NoRestart
+
+ # 4. Restart-Computer to take effect
+ Restart-Computer
+ ```
diff --git a/data/docs/toc-mapping.yml b/data/docs/toc-mapping.yml
index 5744c19a..0263bfda 100644
--- a/data/docs/toc-mapping.yml
+++ b/data/docs/toc-mapping.yml
@@ -63,3 +63,4 @@ v2.1.0-beta.0: v2.1.0-beta.0-toc
v2.1.0: v2.1.0-toc
v2.2.0-alpha.0: v2.2.0-alpha.0-toc
v2.2.0-alpha.1: v2.2.0-alpha.1-toc
+v2.2.0-alpha.2: v2.2.0-alpha.2-toc
diff --git a/data/docs/v2.2.0-alpha.2-toc.yml b/data/docs/v2.2.0-alpha.2-toc.yml
new file mode 100644
index 00000000..8d9e3994
--- /dev/null
+++ b/data/docs/v2.2.0-alpha.2-toc.yml
@@ -0,0 +1,125 @@
+toc:
+ - title: Introduction
+ subfolderitems:
+ - page: Overview
+ url: /
+ - page: Getting Started
+ url: /docs/getting-started
+ - page: Support for K8s Installers
+ url: /docs/kubernetes-installers
+ - page: Deploying on Kind
+ url: /docs/kind
+ - page: Deploying on Minikube
+ url: /docs/minikube
+ - page: Configuration
+ url: /docs/configuration
+ - page: Installing with Helm
+ url: /docs/helm
+ - title: Cloud Deployment
+ subfolderitems:
+ - page: EKS Installation
+ url: /docs/eks-installation
+ - page: AKS Installation
+ url: /docs/aks-installation
+ - page: GKE Installation (Alpha)
+ url: /docs/gke-installation
+ - page: Running Antrea In Policy Only Mode
+ url: /docs/design/policy-only
+ - title: Reference
+ subfolderitems:
+ - page: Antrea Network Policy
+ url: /docs/antrea-network-policy
+ - page: Antctl
+ url: /docs/antctl
+ - page: Architecture
+ url: /docs/design/architecture
+      - page: Traffic Encryption (IPsec / WireGuard)
+ url: /docs/traffic-encryption
+ - page: Securing Control Plane
+ url: /docs/securing-control-plane
+ - page: Security considerations
+ url: /docs/security
+ - page: Troubleshooting
+ url: /docs/troubleshooting
+ - page: OS-specific Known Issues
+ url: /docs/os-issues
+ - page: OVS Pipeline
+ url: /docs/design/ovs-pipeline
+ - page: Feature Gates
+ url: /docs/feature-gates
+ - page: Antrea Proxy
+ url: /docs/antrea-proxy
+ - page: Network Flow Visibility
+ url: /docs/network-flow-visibility
+ - page: Traceflow Guide
+ url: /docs/traceflow-guide
+ - page: NoEncap and Hybrid Traffic Modes
+ url: /docs/noencap-hybrid-modes
+ - page: Egress Guide
+ url: /docs/egress
+ - page: NodePortLocal Guide
+ url: /docs/node-port-local
+ - page: Antrea IPAM Guide
+ url: /docs/antrea-ipam
+ - page: Exposing Services of type LoadBalancer
+ url: /docs/service-loadbalancer
+ - page: Traffic Control
+ url: /docs/traffic-control
+ - page: BGP Support
+ url: /docs/bgp-policy
+ - page: Versioning
+ url: /docs/versioning
+ - page: Antrea API Groups
+ url: /docs/api
+ - page: Antrea API Reference
+ url: /docs/api-reference
+ - title: Windows
+ subfolderitems:
+ - page: Windows Deployment
+ url: /docs/windows
+ - page: Windows Design
+ url: /docs/design/windows-design
+ - title: Integrations
+ subfolderitems:
+ - page: Octant Plugin Installation
+ url: /docs/octant-plugin-installation
+ - page: Prometheus Integration
+ url: /docs/prometheus-integration
+ - title: Cookbooks
+ subfolderitems:
+ - page: Using Antrea with Multus
+ url: /docs/cookbooks/multus
+ - page: Using Fluentd to collect Network policy logs
+ url: /docs/cookbooks/fluentd
+ - title: Multicluster
+ subfolderitems:
+ - page: Quick Start
+ url: /docs/multicluster/quick-start
+ - page: User guide
+ url: /docs/multicluster/user-guide
+ - page: Antctl
+ url: /docs/multicluster/antctl
+ - page: Architecture
+ url: /docs/multicluster/architecture
+ - title: Developer Guide
+ subfolderitems:
+ - page: Code Generation
+        url: /docs/contributors/code-generation
+ - page: Release Instructions
+ url: /docs/maintainers/release
+ - page: Issue Management
+ url: /docs/contributors/issue-management
+ - page: GitHub Labels
+ url: /docs/contributors/github-labels
+ - title: Project Information
+ subfolderitems:
+ - page: Contributing to Antrea
+ url: /contributing
+ - page: Roadmap
+ url: /roadmap
+ - page: Change Log
+ url: /changelog
+ - page: Code of Conduct
+ url: /code_of_conduct
+ - page: Antrea Adopters
+ url: /adopters