Commit

Fix hardcoded link versions: v23.1 through v24.2
rmloveland committed Jul 26, 2024
1 parent 9b075a6 commit 37ae793
Showing 13 changed files with 16 additions and 16 deletions.
@@ -1 +1 @@
- CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.1/data-domiciling.md %}) requirements.
+ CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
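The same replacement recurs in the next three hunks; extracted from the surrounding prose, the pattern is easier to read. A commented sketch — the conditional is verbatim from the diff, while the notes on where each variable is defined are assumptions about the site configuration:

```liquid
{% comment %}
  Choose the link target based on which page renders this include.
  site.current_cloud_version: presumably a site-wide value in _config.yml
    naming the CockroachDB version that Cloud pages should link to.
  page.version.version: the rendering page's own version (e.g., "v23.2"),
    presumably set through front-matter defaults for each versioned directory.
{% endcomment %}
{% if page.title contains "Cloud" or page.title contains "Serverless" %}
  [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %})
{% else %}
  [data domiciling]({% link {{page.version.version}}/data-domiciling.md %})
{% endif %}
```

The `page.title contains` test is a heuristic: pages whose titles mention "Cloud" or "Serverless" link to the Cloud-tracking version, and every other (versioned) page links within its own version.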
@@ -1 +1 @@
- CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
+ CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
@@ -1 +1 @@
- CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
+ CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
@@ -1 +1 @@
- CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with [data domiciling]({% link v23.2/data-domiciling.md %}) requirements.
+ CockroachDB {{ site.data.products.serverless }} clusters operate with a [different architecture]({% link cockroachcloud/architecture.md %}#cockroachdb-serverless) compared to CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }} clusters. These architectural differences have implications for how locality-aware backups can run. Serverless clusters will scale resources depending on whether they are actively in use, which means that it is less likely to have a SQL pod available in every locality. As a result, your Serverless cluster may not have a SQL pod in the locality where the data resides, which can lead to the cluster uploading that data to a storage bucket in a locality where you do have active SQL pods. You should consider this as you plan a backup strategy that must comply with {% if page.title contains "Cloud" or page.title contains "Serverless" %} [data domiciling]({% link {{site.current_cloud_version}}/data-domiciling.md %}) {% else %} [data domiciling]({% link {{page.version.version}}/data-domiciling.md %}) {% endif %} requirements.
2 changes: 1 addition & 1 deletion src/current/cockroachcloud/backup-and-restore-overview.md
@@ -217,4 +217,4 @@ For practical examples of running backup and restore jobs, watch the following v

- Considerations for using [backup](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/backup#considerations) and [restore](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/restore#considerations).
- [Backup collections](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/take-full-and-incremental-backups#backup-collections) for details on how CockroachDB stores backups.
- - [Restoring backups](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/restoring-backups-across-versions) across major versions of CockroachDB.
\ No newline at end of file
+ - [Restoring backups](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/restoring-backups-across-versions) across major versions of CockroachDB.
2 changes: 1 addition & 1 deletion src/current/cockroachcloud/cmek.md
@@ -35,7 +35,7 @@ This section describes some of the ways that CMEK can help you protect your data

<ul><li><p>If a CMEK key is destroyed, the cluster's data can't be recovered by you or by CockroachDB {{ site.data.products.cloud }}, even by restoring from a CockroachDB {{ site.data.products.cloud }}-managed backup. After enabling CMEK, do not disable, schedule for destruction, or destroy a CMEK that is in use by clusters. Instead, first rotate the cluster to use a new CMEK or decommission the cluster, and then use your KMS platform's audit logs to verify that the CMEK is no longer being used.</p></li><li><p>To protect against inadvertent data loss, your KMS platform may impose a waiting period before a key is permanently deleted. This waiting period may be configurable when you create the key. Check the documentation for your KMS platform for details about how long before a key deletion is permanent and irreversible.</p></li></ul>
{{site.data.alerts.end}}
- - **Enforcement of data domiciling and locality requirements**: In a multi-region cluster, you can confine an individual database to a single region or multiple regions. For more information and limitations, see [Data Domiciling with CockroachDB](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/data-domiciling). When you enable CMEK on a multi-region cluster, you can optionally assign a separate CMEK key to each region, or use the same CMEK key for multiple related regions.
+ - **Enforcement of data domiciling and locality requirements**: In a multi-region cluster, you can confine an individual database to a single region or multiple regions. For more information and limitations, see [Data Domiciling with CockroachDB]({% link {{site.current_cloud_version}}/data-domiciling.md %}). When you enable CMEK on a multi-region cluster, you can optionally assign a separate CMEK key to each region, or use the same CMEK key for multiple related regions.
- **Enforcement of encryption requirements**: With CMEK, you have control over the CMEK key's encryption strength. The CMEK key's size is determined by what your KMS provider supports.

You can use your KMS platform's controls to configure the regions where the CMEK key is available, enable automatic rotation schedules for CMEK keys, and view audit logs that show each time the CMEK key is used by CockroachDB {{ site.data.products.cloud }}. CockroachDB {{ site.data.products.cloud }} does not need any visibility into these details.
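Besides un-hardcoding the version, this hunk converts an absolute URL into a Jekyll `{% link %}` tag. A minimal sketch of the difference, assuming stock Jekyll semantics for `{% link %}` (and noting that interpolating a variable inside `{% link %}` requires Jekyll 4.0 or later, so presumably that is what this repo builds on):

```liquid
{% comment %} Absolute URL: emitted as-is; a broken path only shows up as a 404 after deploy. {% endcomment %}
[Data Domiciling with CockroachDB](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/data-domiciling)

{% comment %} Link tag: Jekyll resolves the target file at build time and fails the build if it is missing. {% endcomment %}
[Data Domiciling with CockroachDB]({% link {{site.current_cloud_version}}/data-domiciling.md %})
```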
2 changes: 1 addition & 1 deletion src/current/v23.2/operational-faqs.md
@@ -9,7 +9,7 @@ docs_area: get_started
## Why is my process hanging when I try to start nodes with the `--background` flag?

{{site.data.alerts.callout_info}}
- Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.
+ Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.

If you do use `--background`, you should also set `--pid-file`. To stop or restart a cluster, send a `SIGTERM` or `SIGHUP` signal to the process ID in the PID file.
{{site.data.alerts.end}}
4 changes: 2 additions & 2 deletions src/current/v23.2/start-a-local-cluster.md
@@ -26,7 +26,7 @@ The store directory is `cockroach-data/` in the same directory as the `cockroach

## Step 1. Start the cluster

- This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd).
+ This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd).

1. Use the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command to start the `node1` in the foreground:

@@ -43,7 +43,7 @@ This section shows how to start a cluster interactively. In production, operator
{{site.data.alerts.callout_info}}
The `--background` flag is not recommended. If you decide to start nodes in the background, you must also pass the `--pid-file` argument. To stop a `cockroach` process running in the background, extract the process ID from the PID file and pass it to the command to [stop the node](#step-7-stop-the-cluster).

- In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd).
+ In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd).
{{site.data.alerts.end}}

You'll see a message like the following:
4 changes: 2 additions & 2 deletions src/current/v24.1/work-with-virtual-clusters.md
@@ -84,8 +84,8 @@ To connect to the system virtual cluster using the DB Console, add the `GET` URL

To [grant]({% link {{ page.version.version }}/grant.md %}) access to the system virtual cluster, you must connect to the system virtual cluster as a user with the `admin` role, then grant either of the following to the SQL user:

- - The `admin` [role]({% link v23.2/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster.
- - The `VIEWSYSTEMDATA` [system privilege]({% link v23.2/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster.
+ - The `admin` [role]({% link {{page.version.version}}/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster.
+ - The `VIEWSYSTEMDATA` [system privilege]({% link {{page.version.version}}/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster.

To prevent unauthorized access, you should limit the users with access to the system virtual cluster.

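This hunk also shows why the hardcoded versions were a correctness bug rather than a style nit: a v24.1 page was pointing at the v23.2 authorization docs. With the variable, each page links within its own version. A sketch of the resolution, assuming the front-matter-defaults mechanism described above:

```liquid
{% comment %}
  On a page under v24.1/, page.version.version renders as "v24.1", so
{% endcomment %}
{% link {{page.version.version}}/security-reference/authorization.md %}
{% comment %}
  resolves to v24.1/security-reference/authorization.md at build time,
  and the same source builds correctly under any other version directory.
{% endcomment %}
```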
2 changes: 1 addition & 1 deletion src/current/v24.2/cockroach-start.md
@@ -263,7 +263,7 @@ Therefore, if you enable WAL failover, you must also update your [logging]({% li
- (**Recommended**) Configure [remote log sinks]({% link {{page.version.version}}/logging-use-cases.md %}#network-logging) that are not correlated with the availability of your cluster's local disks.
- If you must log to local disks:
1. Disable [audit logging]({% link {{ page.version.version }}/sql-audit-logging.md %}). File-based audit logging and the WAL failover feature cannot coexist. File-based audit logging provides guarantees that every log message makes it to disk, otherwise CockroachDB needs to shut down. Because of this, resuming operations in the face of disk unavailability is not compatible with audit logging.
- 1. Enable asynchronous buffering of [`file-groups` log sinks]({% link {{ page.version.version }}/configure-logs.md %}#output-to-files) using the `buffering` configuration option. The `buffering` configuration can be applied to [`file-defaults`]({% link {{ page.version.version }}/configure-logs.md %}#configure-logging-defaults) or individual `file-groups` as needed. Note that enabling asynchronous buffering of `file-groups` log sinks is in [preview]({% link v24.1/cockroachdb-feature-availability.md %}#features-in-preview).
+ 1. Enable asynchronous buffering of [`file-groups` log sinks]({% link {{ page.version.version }}/configure-logs.md %}#output-to-files) using the `buffering` configuration option. The `buffering` configuration can be applied to [`file-defaults`]({% link {{ page.version.version }}/configure-logs.md %}#configure-logging-defaults) or individual `file-groups` as needed. Note that enabling asynchronous buffering of `file-groups` log sinks is in [preview]({% link {{page.version.version}}/cockroachdb-feature-availability.md %}#features-in-preview).
1. Set `max-staleness: 1s` and `flush-trigger-size: 256KiB`.
1. When `buffering` is enabled, `buffered-writes` must be explicitly disabled as shown below. This is necessary because `buffered-writes` does not provide true asynchronous disk access, but rather a small buffer. If the small buffer fills up, it can cause internal routines performing logging operations to hang. This in turn will cause internal routines doing other important work to hang, potentially affecting cluster stability.
1. The recommended logging configuration for using file-based logging with WAL failover is as follows:
2 changes: 1 addition & 1 deletion src/current/v24.2/operational-faqs.md
@@ -9,7 +9,7 @@ docs_area: get_started
## Why is my process hanging when I try to start nodes with the `--background` flag?

{{site.data.alerts.callout_info}}
- Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.
+ Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely.

If you do use `--background`, you should also set `--pid-file`. To stop or restart a cluster, send a `SIGTERM` or `SIGHUP` signal to the process ID in the PID file.
{{site.data.alerts.end}}