diff --git a/src/current/_config_cockroachdb.yml b/src/current/_config_cockroachdb.yml index 06587c7afed..f1eae8a8873 100644 --- a/src/current/_config_cockroachdb.yml +++ b/src/current/_config_cockroachdb.yml @@ -1,7 +1,7 @@ baseurl: /docs -current_cloud_version: v24.1 +current_cloud_version: v24.2 destination: _site/docs homepage_title: CockroachDB Docs versions: - stable: v24.1 + stable: v24.2 dev: v24.2 diff --git a/src/current/_config_cockroachdb_local.yml b/src/current/_config_cockroachdb_local.yml index bf9b034ccd1..4ce42acddd9 100644 --- a/src/current/_config_cockroachdb_local.yml +++ b/src/current/_config_cockroachdb_local.yml @@ -6,7 +6,6 @@ exclude: - "v19.1" - "v19.2" - "v20.1" -- "v20.2" - "v21.1" - "ci" - "scripts" diff --git a/src/current/_data/alerts.yml b/src/current/_data/alerts.yml index c0d0c856ed1..3a3564ec93a 100644 --- a/src/current/_data/alerts.yml +++ b/src/current/_data/alerts.yml @@ -2,11 +2,13 @@ tip: ' @@ -1171,7 +1171,7 @@ Parameter | Description `{host}` | The host on which the CockroachDB node is running. `{port}` | The port at which the CockroachDB node is listening. `{database}` | The name of the (existing) database. -`{root-cert}` | The [URL-encoded](https://wikipedia.org/wiki/Percent-encoding) path to the root certificate that you [downloaded from the CockroachDB Cloud Console](https://www.cockroachlabs.com/docs/cockroachcloud/authentication#node-identity-verification). +`{root-cert}` | The [URL-encoded](https://wikipedia.org/wiki/Percent-encoding) path to the root certificate that you [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification). @@ -1196,7 +1196,7 @@ Parameter | Description
{{site.data.alerts.callout_info}} -To connect to a CockroachDB {{ site.data.products.serverless }} cluster from a Ruby application, you must have a valid CA certificate located at ~/.postgresql/root.crt.
For instructions on downloading a CA certificate from the CockroachDB {{ site.data.products.cloud }} Console, see Connect to a CockroachDB {{ site.data.products.serverless }} Cluster. +To connect to a CockroachDB {{ site.data.products.serverless }} cluster from a Ruby application, you must have a valid CA certificate located at ~/.postgresql/root.crt.
For instructions on downloading a CA certificate from the CockroachDB {{ site.data.products.cloud }} Console, see Connect to a CockroachDB {{ site.data.products.serverless }} Cluster. {{site.data.alerts.end}}
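With the certificate at that default path, a minimal sketch of a connection URL for a libpq-based Ruby driver needs only `sslmode=verify-full`; the user, host, and database below are hypothetical placeholders:

~~~
postgresql://maxroach:{password}@free-tier.gcp-us-central1.cockroachlabs.cloud:26257/defaultdb?sslmode=verify-full
~~~

Because libpq-based drivers look for `~/.postgresql/root.crt` by default, no explicit `sslrootcert` parameter is required in this form.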
@@ -1331,7 +1331,7 @@ Parameter | Description `{host}` | The host on which the CockroachDB node is running. `{port}` | The port at which the CockroachDB node is listening. `{database}` | The name of the (existing) database. -`{root-cert}` | The path to the root certificate that you [downloaded from the CockroachDB Cloud Console](https://www.cockroachlabs.com/docs/cockroachcloud/authentication#node-identity-verification). +`{root-cert}` | The path to the root certificate that you [downloaded from the CockroachDB Cloud Console]({% link cockroachcloud/authentication.md %}#node-identity-verification).
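Where a connection string requires the `{root-cert}` path to be [URL-encoded](https://wikipedia.org/wiki/Percent-encoding), as in the earlier hunk of this file, each `/` in the path becomes `%2F`. For example, a hypothetical path `/Users/maxroach/.postgresql/root.crt` would be passed as:

~~~
sslrootcert=%2FUsers%2Fmaxroach%2F.postgresql%2Froot.crt
~~~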
diff --git a/src/current/v24.2/connection-parameters.md b/src/current/v24.2/connection-parameters.md index 5be9f007c96..b7a22d63cc7 100644 --- a/src/current/v24.2/connection-parameters.md +++ b/src/current/v24.2/connection-parameters.md @@ -91,6 +91,7 @@ Parameter | Description | Default value `sslkey` | Path to the [client private key]({% link {{ page.version.version }}/cockroach-cert.md %}), when `sslmode` is not `disable`. | Empty string. `password` | The SQL user's password. It is not recommended to pass the password in the URL directly.

Note that passwords with special characters must be passed as [query string parameters](#additional-connection-parameters) (e.g., `postgres://maxroach@localhost:26257/movr?password=`) and not as a component in the connection URL (e.g., `postgres://maxroach:@localhost:26257/movr`). | Empty string `options` | [Additional options](#supported-options-parameters) to be passed to the server. | Empty string +`results_buffer_size` | Default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. Can also be set using the [`sql.defaults.results_buffer.size` cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size). Can be set as a top-level query parameter or as an `options` parameter. #### Supported `options` parameters @@ -98,8 +99,9 @@ CockroachDB supports the following `options` parameters. After the first `option Parameter | Description ----------|------------- -`--cluster=` | Identifies your tenant cluster on a [multi-tenant host](https://www.cockroachlabs.com/docs/cockroachcloud/architecture#architecture). For example, `funny-skunk-123`. This option is deprecated. The `host` in the connection string now includes the tenant information. +`--cluster=` | Identifies your tenant cluster on a [multi-tenant host]({% link cockroachcloud/architecture.md %}#architecture). For example, `funny-skunk-123`. This option is deprecated. The `host` in the connection string now includes the tenant information. `-c =` | Sets a [session variable]({% link {{ page.version.version }}/set-vars.md %}) for the SQL session. +`results_buffer_size` | Default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. Can also be set using the [`sql.defaults.results_buffer.size` cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size). Can be set as a top-level query parameter or as an `options` parameter. {{site.data.alerts.callout_info}} Note that some drivers require certain characters to be properly encoded in URL connection strings. For example, spaces in [a JDBC connection string](https://jdbc.postgresql.org/documentation/use/#connection-parameters) must be specified as `%20`. diff --git a/src/current/v24.2/cost-based-optimizer.md b/src/current/v24.2/cost-based-optimizer.md index 93e69a066cb..6467f86a6a9 100644 --- a/src/current/v24.2/cost-based-optimizer.md +++ b/src/current/v24.2/cost-based-optimizer.md @@ -236,15 +236,15 @@ You can disable statement plans that perform full table scans with the [`disallo ## Control whether the optimizer uses an index -You can specify [whether an index is visible]({% link {{ page.version.version }}/alter-index.md %}#not-visible) to the cost-based optimizer. By default, indexes are visible. If not visible, the index will not be used in queries unless it is specifically selected with an [index hint]({% link {{ page.version.version }}/indexes.md %}#selection). +You can specify [whether an index is visible]({% link {{ page.version.version }}/alter-index.md %}#not-visible) to the cost-based optimizer. By default, indexes are visible. If not visible, the index will not be used in queries unless it is specifically selected with an [index hint]({% link {{ page.version.version }}/indexes.md %}#selection). This allows you to create an index and check for query plan changes without affecting production queries. 
For an example, see [Set an index to be not visible]({% link {{ page.version.version }}/alter-index.md %}#set-an-index-to-be-not-visible). -This allows you to create an index and check for query plan changes without affecting production queries. For an example, see [Set an index to be not visible]({% link {{ page.version.version }}/alter-index.md %}#set-an-index-to-be-not-visible). +You can also set an index as [partially visible]({% link {{ page.version.version }}/alter-index.md %}#visibility) within a range of `0.0` to `1.0`, where `0.0` means not visible and `1.0` means visible. Any value between `0.0` and `1.0` means that an index is visible to the specified fraction of queries. {% include {{ page.version.version }}/sql/partially-visible-indexes.md %} {{site.data.alerts.callout_info}} Indexes that are not visible are still used to enforce `UNIQUE` and `FOREIGN KEY` [constraints]({% link {{ page.version.version }}/constraints.md %}). For more considerations, see [Index visibility considerations]({% link {{ page.version.version }}/alter-index.md %}#not-visible). {{site.data.alerts.end}} -You can instruct the optimizer to use indexes marked as `NOT VISIBLE` with the [`optimizer_use_not_visible_indexes` session variable]({% link {{ page.version.version }}/set-vars.md %}#optimizer-use-not-visible-indexes). By default, the variable is set to `off`. +You can instruct the optimizer to use indexes marked as not visible with the [`optimizer_use_not_visible_indexes` session variable]({% link {{ page.version.version }}/set-vars.md %}#optimizer-use-not-visible-indexes). By default, the variable is set to `off`. ## Locality optimized search in multi-region clusters @@ -277,27 +277,77 @@ Only tables with `ZONE` [survivability]({% link {{ page.version.version }}/multi ## Query plan cache -CockroachDB uses a cache for the query plans generated by the optimizer. This can lead to faster query execution since the database can reuse a query plan that was previously calculated, rather than computing a new plan each time a query is executed. +CockroachDB caches some of the query plans generated by the optimizer. The query plan cache is used for the following types of statements: + +- Prepared statements. +- Non-prepared statements using identical constant values. + +Caching query plans leads to faster query execution: rather than generating a new plan each time a query is executed, CockroachDB reuses a query plan that was previously generated. The query plan cache is enabled by default. To disable it, execute the following statement: {% include_cached copy-clipboard.html %} ~~~ sql -> SET CLUSTER SETTING sql.query_cache.enabled = false; +SET CLUSTER SETTING sql.query_cache.enabled = false; ~~~ -Only the following statements use the plan cache: +The following statements can use the plan cache: [`SELECT`]({% link {{ page.version.version }}/select-clause.md %}), [`INSERT`]({% link {{ page.version.version }}/insert.md %}), [`UPDATE`]({% link {{ page.version.version }}/update.md %}), [`UPSERT`]({% link {{ page.version.version }}/upsert.md %}), and [`DELETE`]({% link {{ page.version.version }}/delete.md %}). -- [`SELECT`]({% link {{ page.version.version }}/select-clause.md %}) -- [`INSERT`]({% link {{ page.version.version }}/insert.md %}) -- [`UPDATE`]({% link {{ page.version.version }}/update.md %}) -- [`UPSERT`]({% link {{ page.version.version }}/upsert.md %}) -- [`DELETE`]({% link {{ page.version.version }}/delete.md %}) +Two types of plans can be cached: custom and generic. 
Refer to [Query plan type](#query-plan-type). -The optimizer can use cached plans if they are: +### Query plan type -- Prepared statements. -- Non-prepared statements using identical constant values. +The following types of plans can be cached: + +- *Custom* query plans are generated for a given query structure and optimized for specific placeholder values, and are re-optimized on subsequent executions. By default, the optimizer uses custom plans. +- {% include_cached new-in.html version="v24.2" %} *Generic* query plans are generated and optimized once without considering specific placeholder values, and are **not** regenerated on subsequent executions, unless the plan becomes stale due to [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) or new [table statistics](#table-statistics) and must be re-optimized. This approach eliminates most of the query latency attributed to planning. + + Generic query plans require an [Enterprise license]({% link {{ page.version.version }}/enterprise-licensing.md %}). This feature is in [preview]({% link {{ page.version.version }}/cockroachdb-feature-availability.md %}) and is subject to change. + + {{site.data.alerts.callout_success}} + Generic query plans will only benefit workloads that use prepared statements, which are issued via explicit `PREPARE` statements or by client libraries using the [PostgreSQL extended wire protocol](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY). Generic query plans are most beneficial for queries with high planning times, such as queries with many [joins]({% link {{ page.version.version }}/joins.md %}). For more information on reducing planning time for such queries, refer to [Reduce planning time for queries with many joins](#reduce-planning-time-for-queries-with-many-joins). + {{site.data.alerts.end}} + +To change the type of plan that is cached, use the [`plan_cache_mode`]({% link {{ page.version.version }}/session-variables.md %}#plan-cache-mode) session setting. This setting applies when a statement is executed, not when it is prepared. Statements are therefore not associated with a specific query plan type when they are prepared. + +The following modes can be set: + +- `force_custom_plan` (default): Force the use of custom plans. +- `force_generic_plan`: Force the use of generic plans. +- `auto`: Automatically determine whether to use custom or generic query plans for prepared statements. Custom plans are used for the first five statement executions. Subsequent executions use a generic plan if its estimated cost is not significantly higher than the average cost of the preceding custom plans. + +{{site.data.alerts.callout_info}} +Generic plans are always used for non-prepared statements that do not contain placeholders or [stable functions]({% link {{ page.version.version }}/functions-and-operators.md %}#function-volatility), regardless of the `plan_cache_mode` setting. +{{site.data.alerts.end}} + +In some cases, generic query plans are less efficient than custom plans. For this reason, Cockroach Labs recommends setting `plan_cache_mode` to `auto` instead of `force_generic_plan`. Under the `auto` setting, the optimizer avoids bad generic plans by falling back to custom plans. 
For example: + +Set `plan_cache_mode` to `auto` at the session level: + +{% include_cached copy-clipboard.html %} +~~~ sql +SET plan_cache_mode = auto; +~~~ + +At the [database level]({% link {{ page.version.version }}/alter-database.md %}#set-session-variable): + +{% include_cached copy-clipboard.html %} +~~~ sql +ALTER DATABASE db SET plan_cache_mode = auto; +~~~ + +At the [role level]({% link {{ page.version.version }}/alter-role.md %}#set-default-session-variable-values-for-a-role): + +{% include_cached copy-clipboard.html %} +~~~ sql +ALTER ROLE db_user SET plan_cache_mode = auto; +~~~ + +To verify the plan type used by a query, check the [`EXPLAIN ANALYZE`]({% link {{ page.version.version }}/explain-analyze.md %}) output for the query. + +- If a generic query plan is optimized for the current execution, the `plan type` in the output is `generic, re-optimized`. +- If a generic query plan is reused for the current execution without performing optimization, the `plan type` in the output is `generic, reused`. +- If a custom query plan is used for the current execution, the `plan type` in the output is `custom`. ## Join reordering @@ -309,7 +359,7 @@ To change this setting, which is controlled by the `reorder_joins_limit` [sessio {% include_cached copy-clipboard.html %} ~~~ sql -> SET reorder_joins_limit = 0; +SET reorder_joins_limit = 0; ~~~ To disable this feature, set the variable to `0`. You can configure the default `reorder_joins_limit` session setting with the [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) `sql.defaults.reorder_joins_limit`, which has a default value of `8`. @@ -328,6 +378,7 @@ The cost-based optimizer explores multiple join orderings to find the lowest-cos - To limit the size of the subtree that can be reordered, set the `reorder_joins_limit` [session variable]({% link {{ page.version.version }}/set-vars.md %}) to a lower value, for example: + {% include_cached copy-clipboard.html %} ~~~ sql SET reorder_joins_limit = 2; ~~~ diff --git a/src/current/v24.2/create-and-configure-changefeeds.md index 7427a07ab4e..17f5686dedc 100644 --- a/src/current/v24.2/create-and-configure-changefeeds.md +++ b/src/current/v24.2/create-and-configure-changefeeds.md @@ -22,7 +22,7 @@ This page describes: ### Enable rangefeeds -Changefeeds connect to a long-lived request (i.e., a rangefeed), which pushes changes as they happen. This reduces the latency of row changes, as well as reduces transaction restarts on tables being watched by a changefeed for some workloads. +Changefeeds connect to a long-lived request called a _rangefeed_, which pushes changes as they happen. This reduces the latency of row changes and reduces transaction restarts on tables being watched by a changefeed for some workloads. **Rangefeeds must be enabled for a changefeed to work.** To [enable the cluster setting]({% link {{ page.version.version }}/set-cluster-setting.md %}): @@ -31,9 +31,9 @@ Changefeeds connect to a long-lived request (i.e., a rangefeed), which pushes ch SET CLUSTER SETTING kv.rangefeed.enabled = true; ~~~ -If you are working on a CockroachDB Serverless cluster, the `kv.rangefeed.enabled` cluster setting is enabled by default. +Any created changefeeds will error until this setting is enabled. If you are working on a CockroachDB Serverless cluster, the `kv.rangefeed.enabled` cluster setting is enabled by default. -Any created changefeeds will error until this setting is enabled. 

Note that enabling rangefeeds currently has a small performance cost (about a 5-10% increase in latencies), whether or not the rangefeed is being used in a changefeed. +Enabling rangefeeds has a small performance cost (about a 5–10% increase in write latencies), whether or not the rangefeed is being used in a changefeed. When `kv.rangefeed.enabled` is set to `true`, a small portion of the latency cost is caused by additional write event information that is sent to the [Raft log]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft-logs) and for [replication]({% link {{ page.version.version }}/architecture/replication-layer.md %}). The remainder of the latency cost is incurred once a changefeed is running; the write event information is reconstructed and sent to an active rangefeed, which will push the event to the changefeed. For further detail on performance-related configuration, refer to the [Advanced Changefeed Configuration]({% link {{ page.version.version }}/advanced-changefeed-configuration.md %}) page. @@ -85,7 +85,11 @@ The following Enterprise and Core sections outline how to create and configure e ## Configure a changefeed -An {{ site.data.products.enterprise }} changefeed streams row-level changes in a configurable format to a configurable sink (i.e., Kafka or a cloud storage sink). You can [create](#create), [pause](#pause), [resume](#resume), and [cancel](#cancel) an {{ site.data.products.enterprise }} changefeed. For a step-by-step example connecting to a specific sink, see the [Changefeed Examples]({% link {{ page.version.version }}/changefeed-examples.md %}) page. +An {{ site.data.products.enterprise }} changefeed streams row-level changes in a [configurable format]({% link {{ page.version.version }}/changefeed-messages.md %}) to one of the following sinks: + +{% include {{ page.version.version }}/cdc/sink-list.md %} + +You can [create](#create), [pause](#pause), [resume](#resume), and [cancel](#cancel) an {{ site.data.products.enterprise }} changefeed. For a step-by-step example connecting to a specific sink, see the [Changefeed Examples]({% link {{ page.version.version }}/changefeed-examples.md %}) page. ### Create @@ -115,6 +119,8 @@ To show a list of {{ site.data.products.enterprise }} changefeed jobs: {% include {{ page.version.version }}/cdc/show-changefeed-job-retention.md %} +{% include {{ page.version.version }}/cdc/filter-show-changefeed-jobs-columns.md %} + For more information, refer to [`SHOW CHANGEFEED JOB`]({% link {{ page.version.version }}/show-jobs.md %}#show-changefeed-jobs). ### Pause @@ -182,6 +188,8 @@ For more information, see [`EXPERIMENTAL CHANGEFEED FOR`]({% link {{ page.versio {% include {{ page.version.version }}/known-limitations/cdc.md %} - {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %} - {% include {{ page.version.version }}/known-limitations/alter-changefeed-cdc-queries.md %} +- {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %} +- {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %} ## See also diff --git a/src/current/v24.2/create-changefeed.md index 80ed80f391c..a1547e45b6d 100644 --- a/src/current/v24.2/create-changefeed.md +++ b/src/current/v24.2/create-changefeed.md @@ -57,7 +57,7 @@ Parameter | Description ### Sink URI -This section provides example URIs for each of the sinks that CockroachDB changefeeds support. 

For more comprehensive detail of using and configuring each sink, refer to the [Changefeed Sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}) page. +To form the URI for each sink: ~~~ '{scheme}://{host}:{port}?{query_parameters}' @@ -65,80 +65,16 @@ This section provides example URIs for each of the sinks that CockroachDB change URI Component | Description -------------------+------------------------------------------------------------------ -`scheme` | The type of sink: [`kafka`](#kafka), [`gcpubsub`](#google-cloud-pub-sub), any [cloud storage sink](#cloud-storage), or [webhook sink](#webhook). +`scheme` | The type of sink, e.g., [`kafka`]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [`gcpubsub`]({% link {{ page.version.version }}/changefeed-sinks.md %}#google-cloud-pub-sub). `host` | The sink's hostname or IP address. `port` | The sink's port. `query_parameters` | The sink's [query parameters](#query-parameters). -{% include {{ page.version.version }}/cdc/sink-URI-external-connection.md %} - -#### Azure Event Hubs - -Example for an [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs) URI: - -{% include {{ page.version.version }}/cdc/azure-event-hubs-uri.md %} - -#### Cloud Storage - -The following are example file URLs for each of the cloud storage schemes: - -{% include {{ page.version.version }}/cdc/list-cloud-changefeed-uris.md %} - -For detail on authentication to cloud storage, refer to the [Cloud Storage Authentication]({% link {{ page.version.version }}/cloud-storage-authentication.md %}) page. Refer to [Changefeed Sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}#cloud-storage-sink) for considerations when using cloud storage. - -#### Confluent Cloud - -Example of a [Confluent Cloud sink]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) URI: - -~~~ -'confluent-cloud://pkc-lzvrd.us-west4.gcp.confluent.cloud:9092?api_key={API key}&api_secret={url-encoded API secret}' -~~~ - -#### Google Cloud Pub/Sub - -Example of a Google Cloud Pub/Sub sink URI: - -~~~ -'gcpubsub://{project name}?region={region}&topic_name={topic name}&AUTH=specified&CREDENTIALS={base64-encoded key}' -~~~ - -In CockroachDB v23.2 and later, the `changefeed.new_pubsub_sink_enabled` cluster setting is enabled by default, which provides improved throughput. For details on the changes to the message format, refer to [Pub/Sub sink messages]({% link {{ page.version.version }}/changefeed-sinks.md %}#pub-sub-sink-messages). - -[Use Cloud Storage for Bulk Operations]({% link {{ page.version.version }}/cloud-storage-authentication.md %}) explains the requirements for the authentication parameter with `specified` or `implicit`. Refer to [Changefeed Sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}#google-cloud-pub-sub) for further consideration. 
- -#### Kafka - -Example of a [Kafka sink]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) URI: - -~~~ -'kafka://broker.address.com:9092?topic_prefix=bar_&tls_enabled=true&ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ&sasl_enabled=true&sasl_user={sasl user}&sasl_password={url-encoded password}&sasl_mechanism=SCRAM-SHA-256' -~~~ - -{{site.data.alerts.callout_info}} -{% include {{page.version.version}}/cdc/kafka-vpc-limitation.md %} -{{site.data.alerts.end}} +For more comprehensive detail of using and configuring each sink, refer to: -#### Webhook +{% include {{ page.version.version }}/cdc/sink-list.md %} -Example of a webhook URI: - -~~~ -'webhook-https://{your-webhook-endpoint}?insecure_tls_skip_verify=true' -~~~ - -Refer to [Changefeed Sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}#webhook-sink) for specifics on webhook sink configuration. - -#### Apache Pulsar - -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} - -Example for an [Apache Pulsar sink]({% link {{ page.version.version }}/changefeed-sinks.md %}#apache-pulsar) URI: - -{% include {{ page.version.version }}/cdc/apache-pulsar-uri.md %} - -Changefeeds emitting to a Pulsar sink do not support external connections or a number of changefeed options. For a full list, refer to the [Changefeed Sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}#apache-pulsar) page. +{% include {{ page.version.version }}/cdc/sink-URI-external-connection.md %} ### Query parameters @@ -148,8 +84,8 @@ Query parameters include: Parameter |
Sink Type
|
Type
| Description -------------------+-----------------------------------------------+-------------------------------------+------------------------------------------------------------ -`ASSUME_ROLE` | [Amazon S3]({% link {{ page.version.version }}/changefeed-sinks.md %}), [Google Cloud Storage](#cloud-storage), [Google Cloud Pub/Sub](#google-cloud-pub-sub) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | {% include {{ page.version.version }}/misc/assume-role-description.md %} -`AUTH` | [Amazon S3]({% link {{ page.version.version }}/changefeed-sinks.md %}), [Google Cloud Storage](#cloud-storage), [Google Cloud Pub/Sub](#google-cloud-pub-sub), [Azure Blob Storage](#cloud-storage) | The authentication parameter can define either `specified` (default) or `implicit` authentication. To use `specified` authentication, pass your [Service Account](https://cloud.google.com/iam/docs/understanding-service-accounts) credentials with the URI. To use `implicit` authentication, configure these credentials via an environment variable. Refer to the [Cloud Storage Authentication page]({% link {{ page.version.version }}/cloud-storage-authentication.md %}) page for examples of each of these. +`ASSUME_ROLE` | [Amazon S3]({% link {{ page.version.version }}/changefeed-sinks.md %}), [Google Cloud Storage]({% link {{ page.version.version }}/changefeed-sinks.md %}#cloud-storage-sink), [Google Cloud Pub/Sub]({% link {{ page.version.version }}/changefeed-sinks.md %}#google-cloud-pub-sub) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | {% include {{ page.version.version }}/misc/assume-role-description.md %} +`AUTH` | [Amazon S3]({% link {{ page.version.version }}/changefeed-sinks.md %}), [Google Cloud Storage]({% link {{ page.version.version }}/changefeed-sinks.md %}#cloud-storage-sink), [Google Cloud Pub/Sub]({% link {{ page.version.version }}/changefeed-sinks.md %}#google-cloud-pub-sub), [Azure Blob Storage]({% link {{ page.version.version }}/changefeed-sinks.md %}#cloud-storage-sink) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The authentication parameter can define either `specified` (default) or `implicit` authentication. To use `specified` authentication, pass your [Service Account](https://cloud.google.com/iam/docs/understanding-service-accounts) credentials with the URI. To use `implicit` authentication, configure these credentials via an environment variable. Refer to the [Cloud Storage Authentication page]({% link {{ page.version.version }}/cloud-storage-authentication.md %}) page for examples of each of these. `api_key` | [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The API key created for the cluster in Confluent Cloud. `api_secret` | [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The API key's secret generated in Confluent Cloud. **Note:** This must be [URL-encoded](https://www.urlencoder.org/) before passing into the connection string. `ca_cert` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [webhook]({% link {{ page.version.version }}/changefeed-sinks.md %}#webhook-sink), [Confluent schema registry](https://docs.confluent.io/platform/current/schema-registry/index.html) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The base64-encoded `ca_cert` file. 
Specify `ca_cert` for a Kafka sink, webhook sink, and/or a Confluent schema registry.

For usage with a Kafka sink, see [Kafka Sink URI]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka).

It's necessary to state `https` in the schema registry's address when passing `ca_cert`:
`confluent_schema_registry='https://schema_registry:8081?ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ'`
See [`confluent_schema_registry`](#confluent-schema-registry) for more detail on using this option.

Note: To encode your `ca.cert`, run `base64 -w 0 ca.cert`. @@ -159,19 +95,22 @@ Parameter |
Sink Type
|
`insecure_tls_skip_verify` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [webhook]({% link {{ page.version.version }}/changefeed-sinks.md %}#webhook-sink) | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | If `true`, disable client-side validation of responses. Note that a CA certificate is still required; this parameter means that the client will not verify the certificate. **Warning:** Use this query parameter with caution, as it creates [MITM](https://wikipedia.org/wiki/Man-in-the-middle_attack) vulnerabilities unless combined with another method of authentication.

**Default:** `false` `partition_format` | [cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#cloud-storage-sink) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Specify how changefeed [file paths](#general-file-format) are partitioned in cloud storage sinks. Use `partition_format` with the following values:

  • `daily` is the default behavior that organizes directories by dates (`2022-05-18/`, `2022-05-19/`, etc.).
  • `hourly` will further organize directories by hour within each date directory (`2022-05-18/06`, `2022-05-18/07`, etc.).
  • `flat` will not partition the files at all.

For example: `CREATE CHANGEFEED FOR TABLE users INTO 'gs://...?AUTH...&partition_format=hourly'`

**Default:** `daily` `S3_STORAGE_CLASS` | [Amazon S3 cloud storage sink]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-s3) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Specify the Amazon S3 storage class for files created by the changefeed. See [Create a changefeed with an S3 storage class](#create-a-changefeed-with-an-s3-storage-class) for the available classes and an example.

**Default:** `STANDARD` +New in v24.2:`sasl_aws_iam_role_arn` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The ARN for the IAM role that has the permissions to create a topic and send data to the topic. For more details on setting up an Amazon MSK cluster with an IAM role, refer to [the AWS documentation](https://docs.aws.amazon.com/msk/latest/developerguide/serverless-getting-started.html). +New in v24.2:`sasl_aws_iam_session_name` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The user-specified string that identifies the session in AWS. +New in v24.2:`sasl_aws_region` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The region of the Amazon MSK cluster. `sasl_client_id` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Client ID for OAuth authentication from a third-party provider. This parameter is only applicable with `sasl_mechanism=OAUTHBEARER`. `sasl_client_secret` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Client secret for OAuth authentication from a third-party provider. This parameter is only applicable with `sasl_mechanism=OAUTHBEARER`. **Note:** You must [base64 encode](https://www.base64encode.org/) this value when passing it in as part of a sink URI. -`sasl_enabled` | [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | If `true`, the authentication protocol can be set to `SCRAM` or `PLAIN` using the `sasl_mechanism` parameter. You must have `tls_enabled` set to `true` to use SASL.

For Confluent Cloud and Azure Event Hubs sinks, this is set to `true` by default.

**Default:** `false` +`sasl_enabled` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk), [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | If `true`, set the authentication protocol with the [`sasl_mechanism`](#sasl-mechanism) parameter. You must have `tls_enabled` set to `true` to use SASL.

For Confluent Cloud and Azure Event Hubs sinks, this is set to `true` by default.

**Default:** `false` `sasl_grant_type` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Override the default OAuth client credentials grant type for other implementations. This parameter is only applicable with `sasl_mechanism=OAUTHBEARER`. `sasl_handshake` | [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | For Confluent Cloud and Azure Event Hubs sinks, this is set to `true` by default. -`sasl_mechanism` | [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Can be set to [`OAUTHBEARER`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_oauth.html), [`SCRAM-SHA-256`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), [`SCRAM-SHA-512`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), or [`PLAIN`](https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_plain.html). A `sasl_user` and `sasl_password` are required.

See the [Connect to a Changefeed Kafka sink with OAuth Using Okta](connect-to-a-changefeed-kafka-sink-with-oauth-using-okta.html) tutorial for detail setting up OAuth using Okta.

For Confluent Cloud and Azure Event Hubs sinks, this is set to `PLAIN` by default.

**Default:** `PLAIN` +`sasl_mechanism` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk), [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Can be set to [`OAUTHBEARER`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_oauth.html), [`SCRAM-SHA-256`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), [`SCRAM-SHA-512`](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html), or [`PLAIN`](https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_plain.html). A `sasl_user` and `sasl_password` are required for `PLAIN` and `SCRAM` authentication.

For Amazon MSK clusters, set to [`AWS_MSK_IAM`]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk). [`sasl_aws_iam_role_arn`](#sasl-aws-iam-role-arn), [`sasl_aws_iam_session_name`](#sasl-aws-iam-session-name), and [`sasl_aws_region`](#sasl-aws-region) are also required in the sink URI.

Refer to the [Connect to a Changefeed Kafka sink with OAuth Using Okta](connect-to-a-changefeed-kafka-sink-with-oauth-using-okta.html) tutorial for details on setting up OAuth using Okta.

For Confluent Cloud and Azure Event Hubs sinks, `sasl_mechanism=PLAIN` is required but set automatically by CockroachDB.

**Default:** `PLAIN` `sasl_scopes` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | A list of scopes that the OAuth token should have access for. This parameter is only applicable with `sasl_mechanism=OAUTHBEARER`. `sasl_token_url` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Client token URL for OAuth authentication from a third-party provider. **Note:** You must [URL encode](https://www.urlencoder.org/) this value before passing in a URI. This parameter is only applicable with `sasl_mechanism=OAUTHBEARER`. -`sasl_user` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Your SASL username. -`sasl_password` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Your SASL password. **Note:** Passwords should be [URL encoded](https://wikipedia.org/wiki/Percent-encoding) since the value can contain characters that would cause authentication to fail. +`sasl_user` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Your SASL username. +`sasl_password` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Your SASL password. **Note:** Passwords should be [URL encoded](https://wikipedia.org/wiki/Percent-encoding) since the value can contain characters that would cause authentication to fail. `shared_access_key` | [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The URL-encoded key for your Event Hub shared access policy. `shared_access_key_name` | [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The name of your Event Hub shared access policy. -`tls_enabled` | [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | If `true`, enable Transport Layer Security (TLS) on the connection to Kafka. This can be used with a `ca_cert` (see below).

For Confluent Cloud and Azure Event Hubs sinks, this is set to `true` by default.

**Default:** `false` +`tls_enabled` | [Amazon MSK]({% link {{ page.version.version }}/changefeed-sinks.md %}#amazon-msk), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | If `true`, enable Transport Layer Security (TLS) on the connection to Kafka. This can be used with a `ca_cert` (see below).

For Confluent Cloud and Azure Event Hubs sinks, this is set to `true` by default.

**Default:** `false` `topic_name` | [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud), [GC Pub/Sub]({% link {{ page.version.version }}/changefeed-sinks.md %}#google-cloud-pub-sub) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Allows arbitrary topic naming for Kafka and GC Pub/Sub topics. See the [Kafka topic naming limitations]({% link {{ page.version.version }}/changefeed-sinks.md %}#topic-naming) or [GC Pub/Sub topic naming]({% link {{ page.version.version }}/changefeed-sinks.md %}#pub-sub-topic-naming) for detail on supported characters etc.

For example, `CREATE CHANGEFEED FOR foo,bar INTO 'kafka://sink?topic_name=all'` will emit all records to a topic named `all`. Note that schemas will still be registered separately. When using Kafka, this parameter can be combined with the [`topic_prefix` parameter](#topic-prefix) (this is not supported for GC Pub/Sub).

**Default:** table name. `topic_prefix` | [Azure Event Hubs]({% link {{ page.version.version }}/changefeed-sinks.md %}#azure-event-hubs), [Kafka]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka), [Confluent Cloud]({% link {{ page.version.version }}/changefeed-sinks.md %}#confluent-cloud) | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Adds a prefix to all topic names.

For example, `CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://...?topic_prefix=bar_'` would emit rows under the topic `bar_foo` instead of `foo`. @@ -192,19 +131,19 @@ Option | Value | Description `full_table_name` | N/A | Use fully qualified table name in topics, subjects, schemas, and record output instead of the default table name. This can prevent unintended behavior when the same table name is present in multiple databases.

**Note:** This option cannot modify existing table names used as topics, subjects, etc., as part of an [`ALTER CHANGEFEED`]({% link {{ page.version.version }}/alter-changefeed.md %}) statement. To modify a topic, subject, etc., to use a fully qualified table name, create a new changefeed with this option.

Example: `CREATE CHANGEFEED FOR foo... WITH full_table_name` will create the topic name `defaultdb.public.foo` instead of `foo`. `gc_protect_expires_after` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Automatically expires protected timestamp records that are older than the defined duration. In the case where a changefeed job remains paused, `gc_protect_expires_after` will trigger the underlying protected timestamp record to expire and cancel the changefeed job to prevent accumulation of protected data.

Refer to [Protect-Changefeed-Data-from-Garbage-Collection]({% link {{ page.version.version }}/protect-changefeed-data.md %}) for more detail on protecting changefeed data. `ignore_disable_changefeed_replication` | [`BOOL`]({% link {{ page.version.version }}/bool.md %}) | When set to `true`, the changefeed **will emit** events even if CDC filtering for TTL jobs is configured using the `disable_changefeed_replication` [session variable]({% link {{ page.version.version }}/set-vars.md %}), `sql.ttl.changefeed_replication.disabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}), or the `ttl_disable_changefeed_replication` [table storage parameter]({% link {{ page.version.version }}/row-level-ttl.md %}).

Refer to [Filter changefeeds for tables using TTL](#filter-changefeeds-for-tables-using-row-level-ttl) for usage details. -`initial_scan` | `yes`/`no`/`only` | Control whether or not an initial scan will occur at the start time of a changefeed. Only one `initial_scan` option (`yes`, `no`, or `only`) can be used. If none of these are set, an initial scan will occur if there is no [`cursor`](#cursor), and will not occur if there is one. This preserves the behavior from previous releases. With `initial_scan = 'only'` set, the changefeed job will end with a successful status (`succeeded`) after the initial scan completes. You cannot specify `yes`, `no`, `only` simultaneously.

If used in conjunction with `cursor`, an initial scan will be performed at the cursor timestamp. If no `cursor` is specified, the initial scan is performed at `now()`.

Although the [`initial_scan` / `no_initial_scan`](https://www.cockroachlabs.com/docs/v21.2/create-changefeed#initial-scan) syntax from previous versions is still supported, you cannot combine the previous and current syntax.

Default: `initial_scan = 'yes'` +`initial_scan` | `yes`/`no`/`only` | Control whether or not an initial scan will occur at the start time of a changefeed. Only one `initial_scan` option (`yes`, `no`, or `only`) can be used. If none of these are set, an initial scan will occur if there is no [`cursor`](#cursor), and will not occur if there is one. This preserves the behavior from previous releases. With `initial_scan = 'only'` set, the changefeed job will end with a successful status (`succeeded`) after the initial scan completes. You cannot specify `yes`, `no`, and `only` simultaneously.

If used in conjunction with `cursor`, an initial scan will be performed at the cursor timestamp. If no `cursor` is specified, the initial scan is performed at `now()`.

Although the [`initial_scan` / `no_initial_scan`]({% link v21.2/create-changefeed.md %}#initial-scan) syntax from previous versions is still supported, you cannot combine the previous and current syntax.

Default: `initial_scan = 'yes'` `kafka_sink_config` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Set fields to configure the required level of message acknowledgement from the Kafka server, the version of the server, and batching parameters for Kafka sinks. Set the message file compression type. See [Kafka sink configuration]({% link {{ page.version.version }}/changefeed-sinks.md %}#kafka-sink-configuration) for more detail on configuring all the available fields for this option.

Example: `CREATE CHANGEFEED FOR table INTO 'kafka://localhost:9092' WITH kafka_sink_config='{"Flush": {"MaxMessages": 1, "Frequency": "1s"}, "RequiredAcks": "ONE"}'` `key_column` | `'column'` | Override the key used in [message metadata]({% link {{ page.version.version }}/changefeed-messages.md %}). This changes the key hashed to determine downstream partitions. In sinks that support partitioning by message, CockroachDB uses the [32-bit FNV-1a](https://wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function) hashing algorithm to determine which partition to send to.

**Note:** `key_column` does not preserve ordering of messages from CockroachDB to the downstream sink, therefore you must also include the [`unordered`](#unordered) option in your changefeed creation statement. It does not affect per-key [ordering guarantees]({% link {{ page.version.version }}/changefeed-messages.md %}#ordering-and-delivery-guarantees) or the output of [`key_in_value`](#key-in-value).

See the [Define a key to determine the changefeed sink partition](#define-a-key-to-determine-the-changefeed-sink-partition) example. `key_in_value` | N/A | Add a primary key array to the emitted message. This makes the [primary key]({% link {{ page.version.version }}/primary-key.md %}) of a deleted row recoverable in sinks where each message has a value but not a key (most have a key and value in each message). `key_in_value` is automatically used for [cloud storage sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}#cloud-storage-sink), [webhook sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}#webhook-sink), and [GC Pub/Sub sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}#google-cloud-pub-sub). `lagging_ranges_threshold` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Set a duration from the present that determines the length of time a range is considered to be lagging behind, which will then track in the [`lagging_ranges`]({% link {{ page.version.version }}/monitor-and-debug-changefeeds.md %}#lagging-ranges-metric) metric. Note that ranges undergoing an [initial scan](#initial-scan) for longer than the threshold duration are considered to be lagging. Starting a changefeed with an initial scan on a large table will likely increment the metric for each range in the table. As ranges complete the initial scan, the number of ranges lagging behind will decrease.

**Default:** `3m` `lagging_ranges_polling_interval` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Set the interval rate for when lagging ranges are checked and the `lagging_ranges` metric is updated. Polling adds latency to the `lagging_ranges` metric being updated. For example, if a range falls behind by 3 minutes, the metric may not update until an additional minute afterward.

**Default:** `1m` `metrics_label` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Define a metrics label to which the metrics for one or multiple changefeeds increment. All changefeeds also have their metrics aggregated.

The maximum length of a label is 128 bytes. There is a limit of 1024 unique labels.

`WITH metrics_label=label_name`

For more detail on usage and considerations, see [Using changefeed metrics labels]({% link {{ page.version.version }}/monitor-and-debug-changefeeds.md %}#using-changefeed-metrics-labels). -`min_checkpoint_frequency` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Controls how often nodes flush their progress to the [coordinating changefeed node]({% link {{ page.version.version }}/how-does-an-enterprise-changefeed-work.md %}). Changefeeds will wait for at least the specified duration before a flush to the sink. This can help you control the flush frequency of higher latency sinks to achieve better throughput. If this is set to `0s`, a node will flush as long as the high-water mark has increased for the ranges that particular node is processing. If a changefeed is resumed, then `min_checkpoint_frequency` is the amount of time that changefeed will need to catch up. That is, it could emit duplicate messages during this time.

**Note:** [`resolved`](#resolved) messages will not be emitted more frequently than the configured `min_checkpoint_frequency` (but may be emitted less frequently). Since `min_checkpoint_frequency` defaults to `30s`, you **must** configure `min_checkpoint_frequency` to at least the desired `resolved` message frequency if you require `resolved` messages more frequently than `30s`.

**Default:** `30s` +`min_checkpoint_frequency` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Controls how often nodes flush their progress to the [coordinating changefeed node]({% link {{ page.version.version }}/how-does-an-enterprise-changefeed-work.md %}). Changefeeds will wait for at least the specified duration before a flush to the sink. This can help you control the flush frequency of higher latency sinks to achieve better throughput. However, more frequent checkpointing can increase CPU usage. If this is set to `0s`, a node will flush messages as long as the high-water mark has increased for the ranges that particular node is processing. If a changefeed is resumed, then `min_checkpoint_frequency` is the amount of time that changefeed will need to catch up. That is, it could emit [duplicate messages]({% link {{ page.version.version }}/changefeed-messages.md %}#duplicate-messages) during this time.

**Note:** [`resolved`](#resolved) messages will not be emitted more frequently than the configured `min_checkpoint_frequency` (but may be emitted less frequently). If you require `resolved` messages more frequently than `30s`, you must configure `min_checkpoint_frequency` to at least the desired `resolved` message frequency. For more details, refer to [Resolved message frequency]({% link {{ page.version.version }}/changefeed-messages.md %}#resolved-timestamp-frequency).

**Default:** `30s` `mvcc_timestamp` | N/A | Include the [MVCC]({% link {{ page.version.version }}/architecture/storage-layer.md %}#mvcc) timestamp for each emitted row in a changefeed. With the `mvcc_timestamp` option, each emitted row will always contain its MVCC timestamp, even during the changefeed's initial backfill. `on_error` | `pause` / `fail` | Use `on_error=pause` to pause the changefeed when encountering **non**-retryable errors. `on_error=pause` will pause the changefeed instead of sending it into a terminal failure state. **Note:** Retryable errors will continue to be retried with this option specified.

Use with [`protect_data_from_gc_on_pause`](#protect-data-from-gc-on-pause) to protect changes from [garbage collection]({% link {{ page.version.version }}/configure-replication-zones.md %}#gc-ttlseconds).

If a changefeed with `on_error=pause` is running when a watched table is [truncated]({% link {{ page.version.version }}/truncate.md %}), the changefeed will pause but will not be able to resume reads from that table. Using [`ALTER CHANGEFEED`]({% link {{ page.version.version }}/alter-changefeed.md %}) to drop the table from the changefeed and then [resuming the job]({% link {{ page.version.version }}/resume-job.md %}) will work, but you cannot add the same table to the changefeed again. Instead, you will need to [create a new changefeed](#start-a-new-changefeed-where-another-ended) for that table.

Default: `on_error=fail` `protect_data_from_gc_on_pause` | N/A | This option is deprecated as of v23.2 and will be removed in a future release.

When a [changefeed is paused]({% link {{ page.version.version }}/pause-job.md %}), ensure that the data needed to [resume the changefeed]({% link {{ page.version.version }}/resume-job.md %}) is not garbage collected. If `protect_data_from_gc_on_pause` is **unset**, pausing the changefeed will release the existing protected timestamp records. It is also important to note that pausing and adding `protect_data_from_gc_on_pause` to a changefeed will not protect data if the [garbage collection]({% link {{ page.version.version }}/configure-replication-zones.md %}#gc-ttlseconds) window has already passed.

Use with [`on_error=pause`](#on-error) to protect changes from garbage collection when encountering non-retryable errors.

Refer to [Protect Changefeed Data from Garbage Collection]({% link {{ page.version.version }}/protect-changefeed-data.md %}) for more detail on protecting changefeed data.

**Note:** If you use this option, changefeeds that are left paused for long periods of time can prevent garbage collection. Use with the [`gc_protect_expires_after`](#gc-protect-expires-after) option to set a limit for protected data and for how long a changefeed will remain paused. `pubsub_sink_config` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | Set fields to configure sink batching and retries. The schema is as follows:

`{ "Flush": { "Messages": ..., "Bytes": ..., "Frequency": ..., }, "Retry": {"Max": ..., "Backoff": ..., } }`.

**Note** that if either `Messages` or `Bytes` are nonzero, then a non-zero value for `Frequency` must be provided.

Refer to [Pub/Sub sink configuration]({% link {{ page.version.version }}/changefeed-sinks.md %}#pub-sub-sink-configuration) for more details on using this option. -`resolved` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Emits [resolved timestamp]({% link {{ page.version.version }}/changefeed-messages.md %}#resolved-messages) events per changefeed in a format dependent on the connected sink. Resolved timestamp events do not emit until all ranges in the changefeed have progressed to a specific point in time.

Set an optional minimal duration between emitting resolved timestamps. Example: `resolved='10s'`. This option will **only** emit a resolved timestamp event if the timestamp has advanced and at least the optional duration has elapsed. If a duration is unspecified, all resolved timestamps are emitted as the high-water mark advances.

**Note:** If you set `resolved` lower than `30s`, then you **must** also set [`min_checkpoint_frequency`](#min-checkpoint-frequency) to at minimum the same value as `resolved`, because `resolved` messages may be emitted less frequently than `min_checkpoint_frequency`, but cannot be emitted more frequently.

Refer to [Resolved messages]({% link {{ page.version.version }}/changefeed-messages.md %}#resolved-messages) for more detail. +`resolved` | [Duration string](https://pkg.go.dev/time#ParseDuration) | Emits [resolved timestamp]({% link {{ page.version.version }}/changefeed-messages.md %}#resolved-messages) events per changefeed in a format dependent on the connected sink. Resolved timestamp events do not emit until the changefeed job has reached a checkpoint.

Set an optional minimum duration between emitting resolved timestamps. Example: `resolved='10s'`. This option will **only** emit a resolved timestamp event if the timestamp has advanced and at least the specified duration has elapsed. If a duration is unspecified, all resolved timestamps are emitted as the high-water mark advances.

**Note:** If you set `resolved` lower than `30s`, then you **must** also set [`min_checkpoint_frequency`](#min-checkpoint-frequency) to at least the same value as `resolved`, because `resolved` messages may be emitted less frequently than `min_checkpoint_frequency`, but cannot be emitted more frequently.

Refer to [Resolved messages]({% link {{ page.version.version }}/changefeed-messages.md %}#resolved-messages) for more detail. `schema_change_events` | `default` / `column_changes` | The type of schema change event that triggers the behavior specified by the `schema_change_policy` option:
  • `default`: Include all [`ADD COLUMN`]({% link {{ page.version.version }}/alter-table.md %}#add-column) events for columns that have a non-`NULL` [`DEFAULT` value]({% link {{ page.version.version }}/default-value.md %}) or are [computed]({% link {{ page.version.version }}/computed-columns.md %}), and all [`DROP COLUMN`]({% link {{ page.version.version }}/alter-table.md %}#drop-column) events.
  • `column_changes`: Include all schema change events that add or remove any column.

Default: `schema_change_events=default` `schema_change_policy` | `backfill` / `nobackfill` / `stop` | The behavior to take when an event specified by the `schema_change_events` option occurs:
  • `backfill`: When [schema changes with column backfill]({% link {{ page.version.version }}/changefeed-messages.md %}#schema-changes-with-column-backfill) are finished, output all watched rows using the new schema.
  • `nobackfill`: For [schema changes with column backfill]({% link {{ page.version.version }}/changefeed-messages.md %}#schema-changes-with-column-backfill), perform no logical backfills. The changefeed will not emit any messages about the schema change.
  • `stop`: For [schema changes with column backfill]({% link {{ page.version.version }}/changefeed-messages.md %}#schema-changes-with-column-backfill), wait for all data preceding the schema change to be resolved before exiting with an error indicating the timestamp at which the schema change occurred. An `error: schema change occurred at <timestamp>` will display in the `cockroach.log` file.

Default: `schema_change_policy=backfill` `split_column_families` | N/A | Use this option to create a changefeed on a table with multiple [column families]({% link {{ page.version.version }}/column-families.md %}). The changefeed will emit messages for each of the table's column families. See [Changefeeds on tables with column families]({% link {{ page.version.version }}/changefeeds-on-tables-with-column-families.md %}) for more usage detail. @@ -280,13 +219,6 @@ The following examples show the syntax for managing changefeeds and starting cha ### Create a changefeed connected to a sink -You can connect a changefeed to the following sinks: - -- Kafka -- Cloud storage / HTTP -- Google Cloud Pub/Sub -- Webhook - {% include_cached copy-clipboard.html %} ~~~ sql CREATE CHANGEFEED FOR TABLE table_name, table_name2, table_name3 @@ -294,10 +226,9 @@ CREATE CHANGEFEED FOR TABLE table_name, table_name2, table_name3 WITH updated, resolved; ~~~ -For guidance on the sink URI, refer to: +You can connect a changefeed to the following sinks: -- The [Changefeed Sinks]({% link {{ page.version.version }}/changefeed-sinks.md %}) page for general detail on query parameters and sink configuration. -- The [Cloud Storage Authentication]({% link {{ page.version.version }}/cloud-storage-authentication.md %}) page for instructions on setting up each supported cloud storage authentication. +{% include {{ page.version.version }}/cdc/sink-list.md %} ### Create a changefeed that filters and transforms change data @@ -366,6 +297,24 @@ CREATE CHANGEFEED FOR TABLE table_name INTO 'external://kafka_sink' For guidance on how to filter changefeed messages to emit [row-level TTL]({% link {{ page.version.version }}/row-level-ttl.md %}) deletes only, refer to [Change Data Capture Queries]({% link {{ page.version.version }}/cdc-queries.md %}#reference-ttl-in-a-cdc-query). +### Disallow schema changes on tables to improve changefeed performance + +Use the `schema_locked` [storage parameter]({% link {{ page.version.version }}/with-storage-parameter.md %}) to disallow [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) on a watched table, which helps to decrease the latency between a write committing to a table and that write being emitted to the [changefeed's sink]({% link {{ page.version.version }}/changefeed-sinks.md %}). You can lock the table before creating a changefeed or while a changefeed is running, which enables the performance improvement for changefeeds watching that table. + +Enable `schema_locked` on the watched table with the [`ALTER TABLE`]({% link {{ page.version.version }}/alter-table.md %}) statement: + +{% include_cached copy-clipboard.html %} +~~~ sql +ALTER TABLE watched_table SET (schema_locked = true); +~~~ + +While `schema_locked` is enabled on a table, attempted schema changes on the table will be rejected and an error returned. If you need to run a schema change on the locked table, unlock the table with `schema_locked = false`, complete the schema change, and then lock the table again with `schema_locked = true`. The changefeed will run as normal while `schema_locked = false`, but it will not benefit from the performance optimization. 
+ +{% include_cached copy-clipboard.html %} +~~~ sql +ALTER TABLE watched_table SET (schema_locked = false); +~~~ + ### Manage a changefeed For {{ site.data.products.enterprise }} changefeeds, use [`SHOW CHANGEFEED JOBS`]({% link {{ page.version.version }}/show-jobs.md %}) to check the status of your changefeed jobs: diff --git a/src/current/v24.2/create-index.md b/src/current/v24.2/create-index.md index 04e69b30671..1818314ac31 100644 --- a/src/current/v24.2/create-index.md +++ b/src/current/v24.2/create-index.md @@ -53,7 +53,7 @@ Parameter | Description `STORING ...`| Store (but do not sort) each column whose name you include.

For information on when to use `STORING`, see [Store Columns](#store-columns). Note that columns that are part of a table's [`PRIMARY KEY`]({% link {{ page.version.version }}/primary-key.md %}) cannot be specified as `STORING` columns in secondary indexes on the table.

`COVERING` and `INCLUDE` are aliases for `STORING` and work identically. `opt_partition_by` | An [Enterprise-only]({% link {{ page.version.version }}/enterprise-licensing.md %}) option that lets you [define index partitions at the row level]({% link {{ page.version.version }}/partitioning.md %}). As of CockroachDB v21.1 and later, most users should use [`REGIONAL BY ROW` tables]({% link {{ page.version.version }}/table-localities.md %}#regional-by-row-tables). Indexes against regional by row tables are automatically partitioned, so explicit index partitioning is not required. `opt_where_clause` | An optional `WHERE` clause that defines the predicate boolean expression of a [partial index]({% link {{ page.version.version }}/partial-indexes.md %}). -`opt_index_visible` | An optional `VISIBLE` or `NOT VISIBLE` clause that indicates whether an index is visible to the [cost-based optimizer]({% link {{ page.version.version }}/cost-based-optimizer.md %}#control-whether-the-optimizer-uses-an-index). If `NOT VISIBLE`, the index will not be used in queries unless it is specifically selected with an [index hint]({% link {{ page.version.version }}/indexes.md %}#selection) or the property is overridden with the [`optimizer_use_not_visible_indexes` session variable]({% link {{ page.version.version }}/set-vars.md %}#optimizer-use-not-visible-indexes). For an example, see [Set an index to be not visible]({% link {{ page.version.version }}/alter-index.md %}#set-an-index-to-be-not-visible).

Indexes that are not visible are still used to enforce `UNIQUE` and `FOREIGN KEY` [constraints]({% link {{ page.version.version }}/constraints.md %}). For more considerations, see [Index visibility considerations](alter-index.html#not-visible). +`opt_index_visible` | An optional `VISIBLE`, `NOT VISIBLE`, or `VISIBILITY` clause that indicates that an [index is visible, not visible, or partially visible to the cost-based optimizer]({% link {{ page.version.version }}/cost-based-optimizer.md %}#control-whether-the-optimizer-uses-an-index). If not visible, the index will not be used in queries unless it is specifically selected with an [index hint]({% link {{ page.version.version }}/indexes.md %}#selection) or the property is overridden with the [`optimizer_use_not_visible_indexes` session variable]({% link {{ page.version.version }}/set-vars.md %}#optimizer-use-not-visible-indexes). For examples, see [Set index visibility]({% link {{ page.version.version }}/alter-index.md %}#set-index-visibility).

Indexes that are not visible are still used to enforce `UNIQUE` and `FOREIGN KEY` [constraints]({% link {{ page.version.version }}/constraints.md %}). For more considerations, see [Index visibility considerations](alter-index.html#not-visible). `USING HASH` | Creates a [hash-sharded index]({% link {{ page.version.version }}/hash-sharded-indexes.md %}). `WITH storage_parameter` | A comma-separated list of [spatial index tuning parameters]({% link {{ page.version.version }}/spatial-indexes.md %}#index-tuning-parameters). Supported parameters include `fillfactor`, `s2_max_level`, `s2_level_mod`, `s2_max_cells`, `geometry_min_x`, `geometry_max_x`, `geometry_min_y`, and `geometry_max_y`. The `fillfactor` parameter is a no-op, allowed for PostgreSQL compatibility.

For details, see [Spatial index tuning parameters]({% link {{ page.version.version }}/spatial-indexes.md %}#index-tuning-parameters). For an example, see [Create a spatial index that uses all of the tuning parameters]({% link {{ page.version.version }}/spatial-indexes.md %}#create-a-spatial-index-that-uses-all-of-the-tuning-parameters). `CONCURRENTLY` | Optional, no-op syntax for PostgreSQL compatibility. All indexes are created concurrently in CockroachDB. diff --git a/src/current/v24.2/create-statistics.md b/src/current/v24.2/create-statistics.md index 4e32438c6ab..1dfa622c46b 100644 --- a/src/current/v24.2/create-statistics.md +++ b/src/current/v24.2/create-statistics.md @@ -213,6 +213,10 @@ To view statistics jobs, there are two options: (6 rows) ~~~ +## Known limitations + +{% include {{ page.version.version }}/known-limitations/create-statistics-aost-limitation.md %} + ## See also - [Cost-Based Optimizer]({% link {{ page.version.version }}/cost-based-optimizer.md %}) diff --git a/src/current/v24.2/cutover-replication.md b/src/current/v24.2/cutover-replication.md index b9e2e1c0d11..fddeca50e82 100644 --- a/src/current/v24.2/cutover-replication.md +++ b/src/current/v24.2/cutover-replication.md @@ -283,10 +283,6 @@ This section illustrates the steps to cut back to the original primary cluster f ALTER VIRTUAL CLUSTER {cluster_a} COMPLETE REPLICATION TO LATEST; ~~~ - {{site.data.alerts.callout_danger}} - {% include {{ page.version.version }}/physical-replication/fast-cutback-latest-timestamp.md %} - {{site.data.alerts.end}} - The `cutover_time` is the timestamp at which the replicated data is consistent. The cluster will revert any data above this timestamp: ~~~ diff --git a/src/current/v24.2/data-types.md b/src/current/v24.2/data-types.md index 634b247591a..ab14f2bee51 100644 --- a/src/current/v24.2/data-types.md +++ b/src/current/v24.2/data-types.md @@ -33,6 +33,7 @@ Type | Description | Example [`TSQUERY`]({% link {{ page.version.version }}/tsquery.md %}) | A list of lexemes and operators used in [full-text search]({% link {{ page.version.version }}/full-text-search.md %}). | `'list' & 'lexem' & 'oper' & 'use' & 'full' & 'text' & 'search'` [`TSVECTOR`]({% link {{ page.version.version }}/tsvector.md %}) | A list of lexemes with optional integer positions and weights used in [full-text search]({% link {{ page.version.version }}/full-text-search.md %}). | `'full':13 'integ':7 'lexem':4 'list':2 'option':6 'posit':8 'search':15 'text':14 'use':11 'weight':10` [`UUID`]({% link {{ page.version.version }}/uuid.md %}) | A 128-bit hexadecimal value. | `7f9c24e8-3b12-4fef-91e0-56a2d5a246ec` +[`VECTOR`]({% link {{ page.version.version }}/vector.md %}) | A fixed-length array of floating-point numbers. | `[1.0, 0.0, 0.0]` ## Data type conversions and casts diff --git a/src/current/v24.2/datadog.md b/src/current/v24.2/datadog.md index 4b47a2203da..4f76cede214 100644 --- a/src/current/v24.2/datadog.md +++ b/src/current/v24.2/datadog.md @@ -8,7 +8,7 @@ docs_area: manage [Datadog](https://www.datadoghq.com/) is a monitoring and security platform for cloud applications. The CockroachDB {{ site.data.products.core }} integration with Datadog enables data collection and alerting on selected [CockroachDB metrics](https://docs.datadoghq.com/integrations/cockroachdb/?tab=host#data-collected) using the Datadog platform. {{site.data.alerts.callout_success}} -This tutorial explores the CockroachDB {{ site.data.products.core }} integration with Datadog. 
For the CockroachDB {{ site.data.products.dedicated }} integration with Datadog, refer to [Monitor CockroachDB Dedicated with Datadog](https://www.cockroachlabs.com/docs/cockroachcloud/tools-page#monitor-cockroachdb-dedicated-with-datadog) instead of this page. +This tutorial explores the CockroachDB {{ site.data.products.core }} integration with Datadog. For the CockroachDB {{ site.data.products.dedicated }} integration with Datadog, refer to [Monitor CockroachDB Dedicated with Datadog]({% link cockroachcloud/tools-page.md %}#monitor-cockroachdb-dedicated-with-datadog) instead of this page. {{site.data.alerts.end}} The CockroachDB {{ site.data.products.core }} integration with Datadog is powered by the [Datadog Agent](https://app.datadoghq.com/account/settings#agent), and supported by Datadog directly: diff --git a/src/current/v24.2/debezium.md index d35bca04a5f..ea2eb513b50 100644 --- a/src/current/v24.2/debezium.md +++ b/src/current/v24.2/debezium.md @@ -28,7 +28,7 @@ Migrating with Debezium requires familiarity with Kafka. Refer to the [Debezium Complete the following items before using Debezium: -- Configure a secure [publicly-accessible]({% link cockroachcloud/network-authorization.md %}) CockroachDB cluster running the latest **{{ page.version.version }}** [production release](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}) with at least one [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users), make a note of the credentials for the SQL user. +- Configure a secure [publicly-accessible]({% link cockroachcloud/network-authorization.md %}) CockroachDB cluster running the latest **{{ page.version.version }}** [production release]({% link releases/{{ page.version.version }}.md %}) with at least one [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users), and make a note of the credentials for the SQL user. - Install and configure [Debezium](https://debezium.io/), [Kafka Connect](https://docs.confluent.io/platform/current/connect/index.html), and [Kafka](https://kafka.apache.org/). ## Migrate data to CockroachDB @@ -117,7 +117,7 @@ Once all of the [prerequisite steps](#before-you-begin) are completed, you can u ## See also - [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}) -- [Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) +- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) - [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %}) - [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %}) - [Stream a Changefeed to a Confluent Cloud Kafka Cluster]({% link {{ page.version.version }}/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md %}) diff --git a/src/current/v24.2/delete-data.md index 1a056e6685d..0625817841f 100644 --- a/src/current/v24.2/delete-data.md +++ b/src/current/v24.2/delete-data.md @@ -11,7 +11,7 @@ This page has instructions for deleting rows of data from CockroachDB, using the Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). 
+- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %}). - [Connect to the database]({% link {{ page.version.version }}/connect-to-the-database.md %}). - [Create a database schema]({% link {{ page.version.version }}/schema-design-overview.md %}). diff --git a/src/current/v24.2/demo-serializable.md b/src/current/v24.2/demo-serializable.md index 65e0701fdd0..1d837db840d 100644 --- a/src/current/v24.2/demo-serializable.md +++ b/src/current/v24.2/demo-serializable.md @@ -456,7 +456,7 @@ When you repeat the scenario on CockroachDB, you'll see that the anomaly is prev ~~~ ERROR: restart transaction: TransactionRetryWithProtoRefreshError: TransactionRetryError: retry txn (RETRY_SERIALIZABLE - failed preemptive refresh due to encountered recently written committed value /Table/105/1/20001/1/0 @1700513356.063385000,2): "sql txn" meta={id=10f4abbc key=/Table/105/1/20001/2/0 iso=Serializable pri=0.00167708 epo=0 ts=1700513366.194063000,2 min=1700513327.262632000,0 seq=1} lock=true stat=PENDING rts=1700513327.262632000,0 wto=false gul=1700513327.762632000,0 SQLSTATE: 40001 - HINT: See: https://www.cockroachlabs.com/docs/v23.2/transaction-retry-error-reference.html#retry_serializable + HINT: See: https://www.cockroachlabs.com/docs/{{ page.version.version }}/transaction-retry-error-reference.html#retry_serializable ~~~ {{site.data.alerts.callout_success}} @@ -546,4 +546,4 @@ You might also want to learn more about how transactions work in CockroachDB and - [Transactions Overview]({% link {{ page.version.version }}/transactions.md %}) - [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/) -- [Read Committed Transactions]({% link {{ page.version.version }}/read-committed.md %}) \ No newline at end of file +- [Read Committed Transactions]({% link {{ page.version.version }}/read-committed.md %}) diff --git a/src/current/v24.2/differences-in-metrics-between-third-party-monitoring-integrations-and-db-console.md b/src/current/v24.2/differences-in-metrics-between-third-party-monitoring-integrations-and-db-console.md index 026a53cfd12..0210883d4ae 100644 --- a/src/current/v24.2/differences-in-metrics-between-third-party-monitoring-integrations-and-db-console.md +++ b/src/current/v24.2/differences-in-metrics-between-third-party-monitoring-integrations-and-db-console.md @@ -4,7 +4,7 @@ summary: Learn how metrics can differ between third-party monitoring tools that toc: true --- -When using [Third-Party Monitoring Integrations]({% link {{ page.version.version }}/third-party-monitoring-tools.md %}), such as the [metrics export feature](https://www.cockroachlabs.com/docs/cockroachcloud/export-metrics), discrepancies may be seen when comparing those metrics charts to ones found on the [Metrics dashboards]({% link {{ page.version.version }}/ui-overview.md %}#metrics) or [custom charts]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}) of the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}). This page explains why these different systems may yield different results. 
+When using [Third-Party Monitoring Integrations]({% link {{ page.version.version }}/third-party-monitoring-tools.md %}), such as the [metrics export feature]({% link cockroachcloud/export-metrics.md %}), discrepancies may be seen when comparing those metrics charts to ones found on the [Metrics dashboards]({% link {{ page.version.version }}/ui-overview.md %}#metrics) or [custom charts]({% link {{ page.version.version }}/ui-custom-chart-debug-page.md %}) of the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}). This page explains why these different systems may yield different results. ## CockroachDB’s Timeseries Database @@ -33,7 +33,7 @@ Datadog scrapes every 60s | 0 | - | - | - | - | - | 0 Since Cockroach Labs does not own the third-party systems, we cannot be expected to have intimate knowledge about how each system’s different query language and timeseries database works. -The [metrics export feature](https://www.cockroachlabs.com/docs/cockroachcloud/export-metrics) scrapes the `/_status/vars` endpoint every 30 seconds, and forwards the data along to the third-party system. The metrics export does no intermediate aggregation, downsampling, or modification of the timeseries values at any point. The raw metrics export data is at a 30-second resolution, but how that data is processed once received by the third party system is unknown to us. +The [metrics export feature]({% link cockroachcloud/export-metrics.md %}) scrapes the `/_status/vars` endpoint every 30 seconds, and forwards the data along to the third-party system. The metrics export does no intermediate aggregation, downsampling, or modification of the timeseries values at any point. The raw metrics export data is at a 30-second resolution, but how that data is processed once received by the third-party system is unknown to us. It is within our scope to understand and support our own timeseries database. 
If - [DB Console Overview]({% link {{ page.version.version }}/ui-overview.md %}) - [Third-Party Monitoring Integrations]({% link {{ page.version.version }}/third-party-monitoring-tools.md %}) - [Monitor CockroachDB with Prometheus]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}) -- [Export Metrics From a CockroachDB Dedicated Cluster](https://www.cockroachlabs.com/docs/cockroachcloud/export-metrics) \ No newline at end of file +- [Export Metrics From a CockroachDB Dedicated Cluster]({% link cockroachcloud/export-metrics.md %}) \ No newline at end of file diff --git a/src/current/v24.2/drop-owned-by.md b/src/current/v24.2/drop-owned-by.md index e1653f8c8d7..b481bedc177 100644 --- a/src/current/v24.2/drop-owned-by.md +++ b/src/current/v24.2/drop-owned-by.md @@ -73,7 +73,7 @@ SHOW GRANTS FOR maxroach; ~~~ ~~~ - database_name | schema_name | relation_name | grantee | privilege_type | is_grantable + database_name | schema_name | object_name | grantee | privilege_type | is_grantable ----------------+-------------+---------------+----------+----------------+--------------- defaultdb | public | max_kv | maxroach | ALL | t (1 row) @@ -132,7 +132,7 @@ SHOW GRANTS FOR maxroach; ~~~ ~~~ - database_name | schema_name | relation_name | grantee | privilege_type | is_grantable + database_name | schema_name | object_name | grantee | privilege_type | is_grantable ----------------+-------------+---------------+----------+----------------+--------------- defaultdb | public | root_kv | maxroach | ALL | f (1 row) diff --git a/src/current/v24.2/drop-user.md b/src/current/v24.2/drop-user.md index 245534d8fbf..7363d11e443 100644 --- a/src/current/v24.2/drop-user.md +++ b/src/current/v24.2/drop-user.md @@ -37,51 +37,74 @@ In this example, first check a user's privileges. Then, revoke the user's privil {% include_cached copy-clipboard.html %} ~~~ sql -> SHOW GRANTS ON test.customers FOR mroach; -~~~ - -~~~ -+-----------+--------+------------+ -| Table | User | Privileges | -+-----------+--------+------------+ -| customers | mroach | CREATE | -| customers | mroach | INSERT | -| customers | mroach | UPDATE | -+-----------+--------+------------+ -(3 rows) +CREATE DATABASE test; +CREATE TABLE customers (k int, v int); +CREATE USER max; +GRANT ALL ON TABLE customers TO max; ~~~ {% include_cached copy-clipboard.html %} ~~~ sql -> REVOKE CREATE,INSERT,UPDATE ON test.customers FROM mroach; +SHOW GRANTS ON customers FOR max; +~~~ + +~~~ + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + test | public | customers | max | ALL | f +(1 row) ~~~ {% include_cached copy-clipboard.html %} ~~~ sql -> DROP USER mroach; +REVOKE CREATE,INSERT,UPDATE ON customers FROM max; ~~~ ### Remove default privileges In addition to removing a user's privileges, a user's [default privileges]({% link {{ page.version.version }}/security-reference/authorization.md %}#default-privileges) must be removed prior to dropping the user. 
If you attempt to drop a user with modified default privileges, you will encounter an error like the following: +{% include_cached copy-clipboard.html %} +~~~ sql +DROP USER max; +~~~ + ~~~ -ERROR: role mroach cannot be dropped because some objects depend on it -privileges for default privileges on new relations belonging to role demo in database movr +ERROR: cannot drop role/user max: grants still exist on test.public.customers SQLSTATE: 2BP01 -HINT: USE test; ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM mroach; ~~~ -Run the `HINT` SQL prior to dropping the user. +To see what privileges the user still has remaining on the table, issue the following statement: {% include_cached copy-clipboard.html %} ~~~ sql -USE test; ALTER DEFAULT PRIVILEGES REVOKE ALL ON TABLES FROM mroach; +SHOW GRANTS ON TABLE test.customers FOR max; ~~~ +~~~ + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + test | public | customers | max | BACKUP | f + test | public | customers | max | CHANGEFEED | f + test | public | customers | max | DELETE | f + test | public | customers | max | DROP | f + test | public | customers | max | SELECT | f + test | public | customers | max | ZONECONFIG | f +(6 rows) +~~~ + +To drop the user you must revoke all of the user's remaining privileges: + +{% include_cached copy-clipboard.html %} +~~~ sql +REVOKE ALL ON TABLE public.customers FROM max; +~~~ + +Now dropping the user should succeed: + {% include_cached copy-clipboard.html %} ~~~ sql -> DROP USER mroach; +DROP USER max; ~~~ ## See also diff --git a/src/current/v24.2/enterprise-licensing.md b/src/current/v24.2/enterprise-licensing.md index 756d8e65b35..c23bcf7bd30 100644 --- a/src/current/v24.2/enterprise-licensing.md +++ b/src/current/v24.2/enterprise-licensing.md @@ -9,6 +9,10 @@ CockroachDB distributes a single binary that contains both core and Enterprise f This page lists Enterprise features. For information on how to obtain and set trial and Enterprise license keys for CockroachDB, see the [Licensing FAQs]({% link {{ page.version.version }}/licensing-faqs.md %}#obtain-a-license). +{{site.data.alerts.callout_info}} +{% include common/license/evolving.md %} +{{site.data.alerts.end}} + {% include {{ page.version.version }}/misc/enterprise-features.md %} ## See also diff --git a/src/current/v24.2/example-apps.md b/src/current/v24.2/example-apps.md index e410f1a6ade..bea3cfd1876 100644 --- a/src/current/v24.2/example-apps.md +++ b/src/current/v24.2/example-apps.md @@ -52,7 +52,7 @@ Note that tools with [**community-level** support]({% link {{ page.version.versi | Driver/ORM Framework | Support level | Example apps | |--------------------------------------------+----------------+--------------------------------------------------------| -| [JDBC](https://jdbc.postgresql.org/) | Full | [Quickstart](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart)
[Simple CRUD]({% link {{ page.version.version }}/build-a-java-app-with-cockroachdb.md %})
[Roach Data (Spring Boot App)](build-a-spring-app-with-cockroachdb-jdbc.html) +| [JDBC](https://jdbc.postgresql.org/) | Full | [Quickstart]({% link cockroachcloud/quickstart.md %})
[Simple CRUD]({% link {{ page.version.version }}/build-a-java-app-with-cockroachdb.md %})
[Roach Data (Spring Boot App)](build-a-spring-app-with-cockroachdb-jdbc.html) | [Hibernate](https://hibernate.org/orm/) | Full | [Simple CRUD]({% link {{ page.version.version }}/build-a-java-app-with-cockroachdb-hibernate.md %})
[Roach Data (Spring Boot App)](build-a-spring-app-with-cockroachdb-jpa.html) | [jOOQ](https://www.jooq.org/) | Full | [Simple CRUD]({% link {{ page.version.version }}/build-a-java-app-with-cockroachdb-jooq.md %}) diff --git a/src/current/v24.2/explain-analyze.md b/src/current/v24.2/explain-analyze.md index 68cb8e8c7d5..19b4e85a535 100644 --- a/src/current/v24.2/explain-analyze.md +++ b/src/current/v24.2/explain-analyze.md @@ -76,7 +76,7 @@ Property | Description `sql cpu time` | The total amount of time spent in the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}). It does not include time spent in the [storage layer]({% link {{ page.version.version }}/architecture/storage-layer.md %}). `regions` | The [regions]({% link {{ page.version.version }}/show-regions.md %}) where the affected nodes were located. `max sql temp disk usage` | ([`DISTSQL`](#distsql-option) option only) How much disk spilling occurs when executing a query. This property is displayed only when the disk usage is greater than zero. -`estimated RUs consumed` | The estimated number of [Request Units (RUs)](https://www.cockroachlabs.com/docs/cockroachcloud/plan-your-cluster-serverless#request-units) consumed by the statement. This property is visible only on CockroachDB {{ site.data.products.serverless }} clusters. +`estimated RUs consumed` | The estimated number of [Request Units (RUs)]({% link cockroachcloud/plan-your-cluster-serverless.md %}#request-units) consumed by the statement. This property is visible only on CockroachDB {{ site.data.products.serverless }} clusters. ### Statement plan tree properties @@ -212,6 +212,7 @@ EXPLAIN ANALYZE SELECT city, AVG(revenue) FROM rides GROUP BY city; execution time: 8ms distribution: full vectorized: true + plan type: custom rows decoded from KV: 500 (88 KiB, 1 gRPC calls) cumulative time spent in KV: 6ms maximum memory usage: 240 KiB @@ -262,6 +263,7 @@ EXPLAIN ANALYZE SELECT * FROM vehicles JOIN rides ON rides.vehicle_id = vehicles execution time: 5ms distribution: local vectorized: true + plan type: custom rows decoded from KV: 515 (90 KiB, 2 gRPC calls) cumulative time spent in KV: 4ms maximum memory usage: 580 KiB @@ -335,6 +337,7 @@ EXPLAIN ANALYZE (VERBOSE) SELECT city, AVG(revenue) FROM rides GROUP BY city; execution time: 5ms distribution: full vectorized: true + plan type: custom rows decoded from KV: 500 (88 KiB, 500 KVs, 1 gRPC calls) cumulative time spent in KV: 4ms maximum memory usage: 240 KiB @@ -397,6 +400,7 @@ EXPLAIN ANALYZE (DISTSQL) SELECT city, AVG(revenue) FROM rides GROUP BY city; execution time: 4ms distribution: full vectorized: true + plan type: custom rows decoded from KV: 500 (88 KiB, 1 gRPC calls) cumulative time spent in KV: 3ms maximum memory usage: 240 KiB @@ -475,6 +479,7 @@ EXPLAIN ANALYZE (REDACT) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue execution time: 6ms distribution: full vectorized: true + plan type: custom rows decoded from KV: 500 (88 KiB, 1 gRPC calls) cumulative time spent in KV: 4ms maximum memory usage: 280 KiB diff --git a/src/current/v24.2/export.md b/src/current/v24.2/export.md index 42f13797d99..b5d40d5e050 100644 --- a/src/current/v24.2/export.md +++ b/src/current/v24.2/export.md @@ -249,7 +249,7 @@ To associate your export objects with a [specific storage class]({% link {{ page ### Export data out of CockroachDB {{ site.data.products.cloud }} -Using `EXPORT` with [`userfile`]({% link {{ page.version.version }}/use-userfile-storage.md %}) is not recommended. 
You can either export data to [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) or to a local CSV file by using [`cockroach sql --execute`](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/cockroach-sql#general): +Using `EXPORT` with [`userfile`]({% link {{ page.version.version }}/use-userfile-storage.md %}) is not recommended. You can either export data to [cloud storage]({% link {{ page.version.version }}/use-cloud-storage.md %}) or to a local CSV file by using [`cockroach sql --execute`]({% link {{site.current_cloud_version}}/cockroach-sql.md %}#general):
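
For example, the following sketch runs a query from the command line and redirects the output to a local CSV file. The connection string, database, table, and output path are illustrative placeholders, not values from this page:

{% include_cached copy-clipboard.html %}
~~~ shell
# Execute a SELECT against the cluster and write the result set as CSV locally.
cockroach sql \
--url 'postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=verify-full' \
--execute "SELECT * FROM db_name.table_name" \
--format=csv > export.csv
~~~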
diff --git a/src/current/v24.2/fips.md b/src/current/v24.2/fips.md index 619e5f669ca..e490ddcd296 100644 --- a/src/current/v24.2/fips.md +++ b/src/current/v24.2/fips.md @@ -167,7 +167,7 @@ To download FIPS-ready CockroachDB runtimes, use the following links. {% comment %} Add "Latest" class to release if it's the latest release. {% endcomment %} {% comment %}Version{% endcomment %} - {{ r.release_name }} {% comment %} Add link to each release r. {% endcomment %} + {{ r.release_name }} {% comment %} Add link to each release r. {% endcomment %} {% if r.release_name == latest_hotfix.release_name %} Latest {% comment %} Add "Latest" badge to release if it's the latest release. {% endcomment %} {% endif %} @@ -353,4 +353,4 @@ Default encryption provided by [Google Cloud](https://cloud.google.com/docs/secu ## See also - [Install CockroachDB]({% link {{ page.version.version }}/install-cockroachdb-linux.md %}) -- [Releases](https://www.cockroachlabs.com/docs/releases) +- [Releases]({% link releases/index.md %}) diff --git a/src/current/v24.2/frequently-asked-questions.md b/src/current/v24.2/frequently-asked-questions.md index d4da65508cf..f01ee54dcfb 100644 --- a/src/current/v24.2/frequently-asked-questions.md +++ b/src/current/v24.2/frequently-asked-questions.md @@ -28,7 +28,7 @@ CockroachDB returns single-row reads in 2ms or less and single-row writes in 4ms ### How easy is it to get started with CockroachDB? -You can get started with CockroachDB with just a few clicks. Sign up for a CockroachDB {{ site.data.products.cloud }} account to create a free CockroachDB {{ site.data.products.serverless }} cluster. For more details, see [Quickstart](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart). +You can get started with CockroachDB with just a few clicks. Sign up for a CockroachDB {{ site.data.products.cloud }} account to create a free CockroachDB {{ site.data.products.serverless }} cluster. For more details, see [Quickstart]({% link cockroachcloud/quickstart.md %}). Alternatively, you can download a binary or run our official Kubernetes configurations or Docker image. For more details, see [Install CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}). @@ -54,8 +54,8 @@ When your cluster spans multiple nodes (physical machines, virtual machines, or For more information about scaling a CockroachDB cluster, see the following docs: -- [Plan Your Serverless Cluster - Cluster scaling](https://www.cockroachlabs.com/docs/cockroachcloud/plan-your-cluster#cluster-scaling) -- [Manage Your Dedicated Cluster - Scale your cluster](https://www.cockroachlabs.com/docs/cockroachcloud/plan-your-cluster?filters=dedicated#cluster-scaling) +- [Plan Your Serverless Cluster - Cluster scaling]({% link cockroachcloud/plan-your-cluster.md %}#cluster-scaling) +- [Manage Your Dedicated Cluster - Scale your cluster]({% link cockroachcloud/plan-your-cluster.md %}?filters=dedicated#cluster-scaling) - [`cockroach start` - Add a node to a cluster]({% link {{ page.version.version }}/cockroach-start.md %}#add-a-node-to-a-cluster) ### How does CockroachDB survive failures? diff --git a/src/current/v24.2/geoserver.md b/src/current/v24.2/geoserver.md index 593e198d66a..f9e12f3e39b 100644 --- a/src/current/v24.2/geoserver.md +++ b/src/current/v24.2/geoserver.md @@ -125,7 +125,7 @@ In the left-hand navigation menu, click **Data > Workspaces**. The **Workspaces* On the **New Workspace** page, enter the following information: - In the **Name** field, enter the text "spatial-tutorial". 
-- In the **Namespace URI** field, enter the URL for the spatial tutorial where this data set is used: https://www.cockroachlabs.com/docs/stable/spatial-data.html. +- In the **Namespace URI** field, enter the URL for the spatial tutorial where this data set is used: `{% link {{ page.version.version }}/spatial-tutorial.md %}`. Press the **Save** button. diff --git a/src/current/v24.2/goldengate.md b/src/current/v24.2/goldengate.md index 949d4878df8..8b39b2feb43 100644 --- a/src/current/v24.2/goldengate.md +++ b/src/current/v24.2/goldengate.md @@ -45,7 +45,7 @@ For limitations on what PostgreSQL features are supported, refer to Oracle's [De - Ensure [libpg](https://www.postgresql.org/download/linux/redhat/) is available on the Oracle GoldenGate host. -- Ensure you have a secure, [publicly available]({% link cockroachcloud/network-authorization.md %}) CockroachDB cluster running the latest **{{ page.version.version }}** [production release](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}), and have created a [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users). +- Ensure you have a secure, [publicly available]({% link cockroachcloud/network-authorization.md %}) CockroachDB cluster running the latest **{{ page.version.version }}** [production release]({% link releases/{{ page.version.version }}.md %}), and have created a [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users). ## Configure Oracle GoldenGate for PostgreSQL @@ -515,6 +515,6 @@ Run the steps in this section on a machine and in a directory where Oracle Golde ## See also - [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}) -- [Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) +- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) - [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %}) - [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %}) diff --git a/src/current/v24.2/grant.md b/src/current/v24.2/grant.md index cf4ee91a14d..0bdeb6bac7c 100644 --- a/src/current/v24.2/grant.md +++ b/src/current/v24.2/grant.md @@ -91,12 +91,13 @@ SHOW GRANTS ON DATABASE movr; ~~~ ~~~ - database_name | grantee | privilege_type | is_grantable -----------------+---------+-----------------+-------------- - movr | admin | ALL | true - movr | max | ALL | true - movr | root | ALL | true -(3 rows) + database_name | grantee | privilege_type | is_grantable +----------------+---------+----------------+--------------- + movr | admin | ALL | t + movr | max | ALL | t + movr | public | CONNECT | f + movr | root | ALL | t +(4 rows) ~~~ ### Grant privileges on specific tables in a database @@ -112,11 +113,11 @@ SHOW GRANTS ON TABLE rides; ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+-------------- - movr | public | rides | admin | ALL | true - movr | public | rides | max | DELETE | false - movr | public | rides | root | ALL | true + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | rides | admin | ALL | t + movr | public | rides | max | DELETE | f + movr | public | rides | root | ALL | t (3 rows) ~~~ @@ 
-138,32 +139,24 @@ SHOW GRANTS ON TABLE movr.public.*; database_name | schema_name | table_name | grantee | privilege_type | is_grantable ----------------+-------------+----------------------------+---------+----------------+--------------- movr | public | promo_codes | admin | ALL | t - movr | public | promo_codes | demo | ALL | t movr | public | promo_codes | max | ALL | f movr | public | promo_codes | root | ALL | t movr | public | rides | admin | ALL | t - movr | public | rides | demo | ALL | t movr | public | rides | max | ALL | f - movr | public | rides | max | UPDATE | t movr | public | rides | root | ALL | t movr | public | user_promo_codes | admin | ALL | t - movr | public | user_promo_codes | demo | ALL | t movr | public | user_promo_codes | max | ALL | f movr | public | user_promo_codes | root | ALL | t movr | public | users | admin | ALL | t - movr | public | users | demo | ALL | t movr | public | users | max | ALL | f movr | public | users | root | ALL | t movr | public | vehicle_location_histories | admin | ALL | t - movr | public | vehicle_location_histories | demo | ALL | t movr | public | vehicle_location_histories | max | ALL | f movr | public | vehicle_location_histories | root | ALL | t movr | public | vehicles | admin | ALL | t - movr | public | vehicles | demo | ALL | t movr | public | vehicles | max | ALL | f - movr | public | vehicles | public | SELECT | f movr | public | vehicles | root | ALL | t -(26 rows) +(18 rows) ~~~ To ensure that anytime a new table is created, all the privileges on that table are granted to a user, use [`ALTER DEFAULT PRIVILEGES`]({% link {{ page.version.version }}/alter-default-privileges.md %}): @@ -191,10 +184,9 @@ SHOW GRANTS ON TABLE usertable; database_name | schema_name | table_name | grantee | privilege_type | is_grantable ----------------+-------------+------------+---------+----------------+--------------- movr | public | usertable | admin | ALL | t - movr | public | usertable | demo | ALL | t movr | public | usertable | max | ALL | f movr | public | usertable | root | ALL | t -(4 rows) +(3 rows) ~~~ ### Grant system-level privileges on the entire cluster @@ -203,16 +195,11 @@ SHOW GRANTS ON TABLE usertable; `root` and [`admin`]({% link {{ page.version.version }}/security-reference/authorization.md %}#admin-role) users have system-level privileges by default, and are capable of granting it to other users and roles using the `GRANT` statement. 
-For example, the following statement allows the user `maxroach` to use the [`SET CLUSTER SETTING`]({% link {{ page.version.version }}/set-cluster-setting.md %}) statement by assigning the `MODIFYCLUSTERSETTING` system privilege: +For example, the following statement allows the user `max` (created in a [previous example](#grant-privileges-on-databases)) to use the [`SET CLUSTER SETTING`]({% link {{ page.version.version }}/set-cluster-setting.md %}) statement by assigning the `MODIFYCLUSTERSETTING` system privilege: {% include_cached copy-clipboard.html %} ~~~ sql -CREATE USER IF NOT EXISTS maxroach WITH PASSWORD 'setecastronomy'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -GRANT SYSTEM MODIFYCLUSTERSETTING TO maxroach; +GRANT SYSTEM MODIFYCLUSTERSETTING TO max; ~~~ ### Make a table readable to every user in the system @@ -228,12 +215,12 @@ SHOW GRANTS ON TABLE vehicles; ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+-------------- - movr | public | vehicles | admin | ALL | true - movr | public | vehicles | max | SELECT | false - movr | public | vehicles | public | SELECT | false - movr | public | vehicles | root | ALL | true + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | vehicles | admin | ALL | t + movr | public | vehicles | max | ALL | f + movr | public | vehicles | public | SELECT | f + movr | public | vehicles | root | ALL | t (4 rows) ~~~ @@ -255,11 +242,11 @@ SHOW GRANTS ON SCHEMA cockroach_labs; ~~~ ~~~ - database_name | schema_name | grantee | privilege_type | is_grantable -----------------+----------------+---------+-----------------+-------------- - movr | cockroach_labs | admin | ALL | true - movr | cockroach_labs | max | ALL | true - movr | cockroach_labs | root | ALL | true + database_name | schema_name | grantee | privilege_type | is_grantable +----------------+----------------+---------+----------------+--------------- + movr | cockroach_labs | admin | ALL | t + movr | cockroach_labs | max | ALL | t + movr | cockroach_labs | root | ALL | t (3 rows) ~~~ @@ -269,7 +256,7 @@ To grant privileges on [user-defined types]({% link {{ page.version.version }}/c {% include_cached copy-clipboard.html %} ~~~ sql -CREATE TYPE IF NOT EXISTS status AS ENUM ('available', 'unavailable'); +CREATE TYPE IF NOT EXISTS status AS ENUM ('open', 'closed', 'inactive'); ~~~ {% include_cached copy-clipboard.html %} @@ -283,14 +270,13 @@ SHOW GRANTS ON TYPE status; ~~~ ~~~ - database_name | schema_name | type_name | grantee | privilege_type | is_grantable -----------------+-------------+-----------+---------+-----------------+-------------- - movr | public | status | admin | ALL | true - movr | public | status | demo | ALL | false - movr | public | status | max | ALL | true - movr | public | status | public | USAGE | false - movr | public | status | root | ALL | true -(5 rows) + database_name | schema_name | type_name | grantee | privilege_type | is_grantable +----------------+-------------+-----------+---------+----------------+--------------- + movr | public | status | admin | ALL | t + movr | public | status | max | ALL | t + movr | public | status | public | USAGE | f + movr | public | status | root | ALL | t +(4 rows) ~~~ ### Grant the privilege to manage the replication zones for a database or table @@ -325,9 +311,9 @@ SHOW 
GRANTS ON ROLE developer; ~~~ ~~~ - role_name | member | is_admin | is_grantable -------------+--------+-----------+----------- - developer | abbey | false | false + role_name | member | is_admin +------------+--------+----------- + developer | abbey | f (1 row) ~~~ @@ -344,9 +330,9 @@ SHOW GRANTS ON ROLE developer; ~~~ ~~~ - role_name | member | is_admin | is_grantable -------------+--------+-----------+----------- - developer | abbey | true | true + role_name | member | is_admin +------------+--------+----------- + developer | abbey | t (1 row) ~~~ @@ -363,12 +349,13 @@ SHOW GRANTS ON TABLE rides; ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+-------------- - movr | public | rides | admin | ALL | true - movr | public | rides | max | UPDATE | true - movr | public | rides | root | ALL | true -(3 rows) + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | rides | admin | ALL | t + movr | public | rides | max | ALL | f + movr | public | rides | max | UPDATE | t + movr | public | rides | root | ALL | t +(4 rows) ~~~ ## See also diff --git a/src/current/v24.2/hashicorp-integration.md index 76a482a071f..7c9f61e8d61 100644 --- a/src/current/v24.2/hashicorp-integration.md +++ b/src/current/v24.2/hashicorp-integration.md @@ -28,10 +28,10 @@ CockroachDB customers can integrate these services, using Vault's KMS secrets en Resources: -- [CMEK overview](https://www.cockroachlabs.com/docs/cockroachcloud/cmek) -- [Manage Customer-Managed Encryption Keys (CMEK) for CockroachDB Dedicated](https://www.cockroachlabs.com/docs/cockroachcloud/managing-cmek) -- [Provisioning GCP KMS Keys and Service Accounts for CMEK](https://www.cockroachlabs.com/docs/cockroachcloud/cmek-ops-gcp) -- [Provisioning AWS KMS Keys and IAM Roles for CMEK](https://www.cockroachlabs.com/docs/cockroachcloud/cmek-ops-aws) +- [CMEK overview]({% link cockroachcloud/cmek.md %}) +- [Manage Customer-Managed Encryption Keys (CMEK) for CockroachDB Dedicated]({% link cockroachcloud/managing-cmek.md %}) +- [Provisioning GCP KMS Keys and Service Accounts for CMEK]({% link cockroachcloud/cmek-ops-gcp.md %}) +- [Provisioning AWS KMS Keys and IAM Roles for CMEK]({% link cockroachcloud/cmek-ops-aws.md %}) ## Use Vault's PKI Secrets Engine to manage a CockroachDB {{ site.data.products.dedicated }} cluster's certificate authority (CA) and client certificates. @@ -41,7 +41,7 @@ By using Vault to manage certificates, you can use only certificates with short Refer to [Transport Layer Security (TLS) and Public Key Infrastructure (PKI)]({% link {{ page.version.version }}/security-reference/transport-layer-security.md %}) for an overview. -Refer to [Certificate Authentication for SQL Clients in CockroachDB Dedicated Clusters](https://www.cockroachlabs.com/docs/cockroachcloud/client-certs-dedicated) for procedures in involved in administering PKI for a CockroachDB {{ site.data.products.dedicated }} cluster. +Refer to [Certificate Authentication for SQL Clients in CockroachDB Dedicated Clusters]({% link cockroachcloud/client-certs-dedicated.md %}) for procedures involved in administering PKI for a CockroachDB {{ site.data.products.dedicated }} cluster. 
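
As a minimal sketch of the Vault side of this workflow (the mount path and TTL below are illustrative, not values prescribed by Cockroach Labs), enabling the PKI secrets engine and capping certificate lifetimes looks like the following:

{% include_cached copy-clipboard.html %}
~~~ shell
# Enable Vault's PKI secrets engine at the default mount path.
vault secrets enable pki
# Cap the maximum certificate lifetime so that only short-lived certificates can be issued.
vault secrets tune -max-lease-ttl=720h pki
~~~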
## Use Vault's PKI Secrets Engine to manage a CockroachDB {{ site.data.products.core }} cluster's certificate authority (CA), server, and client certificates @@ -75,12 +75,12 @@ Vault's [Transit Secrets Engine](https://www.vaultproject.io/docs/secrets/transi ## See also -- [CMEK overview](https://www.cockroachlabs.com/docs/cockroachcloud/cmek) -- [Manage Customer-Managed Encryption Keys (CMEK) for CockroachDB Dedicated](https://www.cockroachlabs.com/docs/cockroachcloud/managing-cmek) -- [Provisioning GCP KMS Keys and Service Accounts for CMEK](https://www.cockroachlabs.com/docs/cockroachcloud/cmek-ops-gcp) -- [Provisioning AWS KMS Keys and IAM Roles for CMEK](https://www.cockroachlabs.com/docs/cockroachcloud/cmek-ops-aws) +- [CMEK overview]({% link cockroachcloud/cmek.md %}) +- [Manage Customer-Managed Encryption Keys (CMEK) for CockroachDB Dedicated]({% link cockroachcloud/managing-cmek.md %}) +- [Provisioning GCP KMS Keys and Service Accounts for CMEK]({% link cockroachcloud/cmek-ops-gcp.md %}) +- [Provisioning AWS KMS Keys and IAM Roles for CMEK]({% link cockroachcloud/cmek-ops-aws.md %}) - [Transport Layer Security (TLS) and Public Key Infrastructure (PKI)]({% link {{ page.version.version }}/security-reference/transport-layer-security.md %}) -- [Certificate Authentication for SQL Clients in Dedicated Clusters](https://www.cockroachlabs.com/docs/cockroachcloud/client-certs-dedicated) +- [Certificate Authentication for SQL Clients in Dedicated Clusters]({% link cockroachcloud/client-certs-dedicated.md %}) - [Manage PKI certificates for a CockroachDB deployment with HashiCorp Vault]({% link {{ page.version.version }}/manage-certs-vault.md %}) - [Using HashiCorp Vault's Dynamic Secrets for Enhanced Database Credential Security in CockroachDB]({% link {{ page.version.version }}/vault-db-secrets-tutorial.md %}) - [Roles]({% link {{ page.version.version }}/security-reference/authorization.md %}#roles) diff --git a/src/current/v24.2/hasura-getting-started.md b/src/current/v24.2/hasura-getting-started.md index 7e129c9404c..6a79da9345f 100644 --- a/src/current/v24.2/hasura-getting-started.md +++ b/src/current/v24.2/hasura-getting-started.md @@ -21,7 +21,7 @@ This tutorial will show you how to configure a Hasura project with a CockroachDB Before you start this tutorial, you need: -- An existing [CockroachDB Cloud](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) cluster, running CockroachDB v22.2 or later. +- An existing [CockroachDB Cloud]({% link cockroachcloud/quickstart.md %}) cluster, running CockroachDB v22.2 or later. - A [Hasura Cloud account](https://hasura.io/docs/latest/getting-started/getting-started-cloud/). ## Configure your cluster @@ -46,7 +46,7 @@ Before you start this tutorial, you need:
1. In the [CockroachDB Cloud console](https://cockroachlabs.cloud/clusters), select your cluster and click **Connect**. -1. If you have not set up [IP Allowlists](https://www.cockroachlabs.com/docs/cockroachcloud/network-authorization#ip-allowlisting) under **Network Security**, follow the instructions to add connections to your cluster from your machine. +1. If you have not set up [IP Allowlists]({% link cockroachcloud/network-authorization.md %}#ip-allowlisting) under **Network Security**, follow the instructions to add connections to your cluster from your machine. 1. Select the SQL user you want to use for the Hasura Cloud connection under **Select SQL user**. If you have not set up a SQL user for this cluster, follow the instructions to create a new SQL user. Be sure to copy and save the password to a secure location. 1. Select **General connection String**. 1. Copy the connection string under **General connection string** and paste it in a secure location. You will use this connection string later to configure Hasura GraphQL Engine with your cluster. @@ -146,7 +146,7 @@ Create a `CRDB_URL` environment variable to store the connection string. ## Add the Hasura Cloud network to your cluster allowlist -Your CockroachDB {{ site.data.products.dedicated }} cluster needs to be configured to [allow incoming client connections](https://www.cockroachlabs.com/docs/cockroachcloud/network-authorization#ip-allowlisting) from Hasura Cloud. +Your CockroachDB {{ site.data.products.dedicated }} cluster needs to be configured to [allow incoming client connections]({% link cockroachcloud/network-authorization.md %}#ip-allowlisting) from Hasura Cloud. 1. In the Hasura Cloud overview page select **Projects**, then click the **Config** icon for your project. diff --git a/src/current/v24.2/how-does-an-enterprise-changefeed-work.md b/src/current/v24.2/how-does-an-enterprise-changefeed-work.md index c936d06a2e9..6ea7b9208a0 100644 --- a/src/current/v24.2/how-does-an-enterprise-changefeed-work.md +++ b/src/current/v24.2/how-does-an-enterprise-changefeed-work.md @@ -5,7 +5,7 @@ toc: true docs_area: stream_data --- -When an {{ site.data.products.enterprise }} changefeed is started on a node, that node becomes the _coordinator_ for the changefeed job (**Node 2** in the diagram). The coordinator node acts as an administrator: keeping track of all other nodes during job execution and the changefeed work as it completes. The changefeed job will run across all nodes in the cluster to access changed data in the watched table. Typically, the [leaseholder]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) for a particular range (or the range’s replica) determines which node emits the changefeed data. +When an {{ site.data.products.enterprise }} changefeed is started on a node, that node becomes the _coordinator_ for the changefeed job (**Node 2** in the diagram). The coordinator node acts as an administrator: keeping track of all other nodes during job execution and the changefeed work as it completes. The changefeed job will run across nodes in the cluster to access changed data in the watched table. The job will evenly distribute changefeed work across the cluster by assigning it to any [replica]({% link {{ page.version.version }}/architecture/replication-layer.md %}) for a particular range, which determines the node that will emit the changefeed data. 
If a [locality filter]({% link {{ page.version.version }}/changefeeds-in-multi-region-deployments.md %}#run-a-changefeed-job-by-locality) is specified, work is distributed to a node that matches the locality filter and has the most locality tiers in common with a node that has a replica. Each node uses its aggregator processors to send back checkpoint progress to the coordinator, which gathers this information to update the high-water mark timestamp. The high-water mark acts as a checkpoint for the changefeed’s job progress, and guarantees that all changes before (or at) the timestamp have been emitted. In the unlikely event that the changefeed’s coordinating node were to fail during the job, that role will move to a different node and the changefeed will restart from the last checkpoint. If restarted, the changefeed may [re-emit messages]({% link {{ page.version.version }}/changefeed-messages.md %}#duplicate-messages) starting at the high-water mark time to the current time. Refer to [Ordering Guarantees]({% link {{ page.version.version }}/changefeed-messages.md %}#ordering-and-delivery-guarantees) for details on CockroachDB's at-least-once delivery guarantee and how per-key message ordering is applied. diff --git a/src/current/v24.2/import-performance-best-practices.md b/src/current/v24.2/import-performance-best-practices.md index 8d69fe1ee73..95ca362a14a 100644 --- a/src/current/v24.2/import-performance-best-practices.md +++ b/src/current/v24.2/import-performance-best-practices.md @@ -35,7 +35,7 @@ When importing into a new table, split your dump data into two files: 1. A SQL file containing the table schema. 1. A CSV, delimited, or AVRO file containing the table data. -Convert the schema-only file using the [Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page). The Schema Conversion Tool automatically creates a new CockroachDB {{ site.data.products.serverless }} database with the converted schema. {% include cockroachcloud/migration/sct-self-hosted.md %} +Convert the schema-only file using the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}). The Schema Conversion Tool automatically creates a new CockroachDB {{ site.data.products.serverless }} database with the converted schema. {% include cockroachcloud/migration/sct-self-hosted.md %} Then use the [`IMPORT INTO`](import-into.html) statement to import the CSV data into the newly created table: diff --git a/src/current/v24.2/index.md b/src/current/v24.2/index.md index e788d7e305e..e38fb301f4b 100644 --- a/src/current/v24.2/index.md +++ b/src/current/v24.2/index.md @@ -90,7 +90,7 @@ docs_area:
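As a concrete sketch of that two-file approach, the following imports CSV data into a table created from the converted schema. The `users` table, column list, and storage bucket URL are illustrative assumptions; `$CRDB_URL` is a placeholder connection string.

~~~ shell
# Import the CSV table data into the table created from the converted schema.
cockroach sql --url "$CRDB_URL" --execute="
IMPORT INTO users (id, city, name)
    CSV DATA ('gs://acme-migration/users.csv?AUTH=implicit');"
~~~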

Deploy

  • Production Checklist
- • CockroachDB Cloud Deployment
+ • CockroachDB Cloud Deployment
  • Kubernetes Overview
  • Performance Profiles
  • Cluster Maintenance
@@ -139,9 +139,9 @@ docs_area: diff --git a/src/current/v24.2/insert-data.md b/src/current/v24.2/insert-data.md index 725936d5212..45f44888172 100644 --- a/src/current/v24.2/insert-data.md +++ b/src/current/v24.2/insert-data.md @@ -11,7 +11,7 @@ This page has instructions for getting data into CockroachDB with various progra Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). +- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %}). - [Connect to the database]({% link {{ page.version.version }}/connect-to-the-database.md %}). diff --git a/src/current/v24.2/install-cockroachdb-linux.md b/src/current/v24.2/install-cockroachdb-linux.md index c7e067446b2..8f0639764fa 100644 --- a/src/current/v24.2/install-cockroachdb-linux.md +++ b/src/current/v24.2/install-cockroachdb-linux.md @@ -15,13 +15,13 @@ docs_area: deploy {% include cockroachcloud/use-cockroachcloud-instead.md %} -See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). +{% include latest-release-details.md %} -Use one of the options below to install CockroachDB. +Use one of the options below to install CockroachDB. To upgrade an existing cluster, refer to [Upgrade to {{ page.version.version }}]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). To install a FIPS-compliant CockroachDB binary, refer to [Install a FIPS-compliant build of CockroachDB]({% link {{ page.version.version }}/fips.md %}). -CockroachDB on ARM is Generally Available in v23.2.0 and above. For limitations specific to ARM, refer to Limitations. +CockroachDB on ARM is Generally Available in v23.2.0 and above. For limitations specific to ARM, refer to Limitations.
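Because ARM support varies by version, it can help to confirm which build you end up with once CockroachDB is installed by any of the methods below. `cockroach version` reports the build platform:

~~~ shell
# On an ARM host, the Platform line should report linux arm64.
cockroach version
~~~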

    Download the binary

    @@ -121,7 +121,7 @@ true

    For CockroachDB v22.2.beta-5 and above, Docker images are multi-platform images that contain binaries for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

    Docker images for previous releases contain Intel binaries only. Intel binaries can run on ARM systems, but with a significant reduction in performance.

-CockroachDB on ARM is in Limited Access in v22.2.13, and is experimental in all other versions. Experimental images are not qualified for production use and not eligible for support or uptime SLA commitments.
+CockroachDB on ARM is in Limited Access in v22.2.13, and is experimental in all other versions. Experimental images are not qualified for production use and not eligible for support or uptime SLA commitments.
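If you use the Docker images, one way to confirm which variant of a multi-platform image Docker resolved for your host is to inspect the pulled image. The `v24.2.0` tag below is an illustrative assumption:

~~~ shell
# Docker selects the variant matching the host platform.
docker pull cockroachdb/cockroach:v24.2.0

# Prints the OS/architecture of the locally resolved image, e.g., linux/arm64.
docker image inspect --format '{{.Os}}/{{.Architecture}}' cockroachdb/cockroach:v24.2.0
~~~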

    1. diff --git a/src/current/v24.2/install-cockroachdb-mac.md b/src/current/v24.2/install-cockroachdb-mac.md index cb16bac1e86..dc2c8382577 100644 --- a/src/current/v24.2/install-cockroachdb-mac.md +++ b/src/current/v24.2/install-cockroachdb-mac.md @@ -15,12 +15,7 @@ docs_area: deploy {% include cockroachcloud/use-cockroachcloud-instead.md %} -See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). - -{% comment %}v22.2.0+{% endcomment %} -{{site.data.alerts.callout_danger}} -

      On macOS ARM systems, spatial features are disabled due to an issue with macOS code signing for the GEOS libraries. Users needing spatial features on an ARM Mac may instead use Rosetta to run the Intel binary or use the Docker image distribution. Refer to GitHub tracking issue for more information.

      -{{site.data.alerts.end}} +{% include latest-release-details.md %} {% capture arch_note_homebrew %}

      For CockroachDB v22.2.x and above, Homebrew installs binaries for your system architecture, either Intel or ARM (Apple Silicon).

      For previous releases, Homebrew installs Intel binaries. Intel binaries can run on ARM systems, but with a significant reduction in performance. CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

      {% endcapture %} @@ -28,7 +23,11 @@ See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.v {% capture arch_note_docker %}

      For CockroachDB v22.2.beta-5 and above, Docker images are multi-platform images that contain binaries for both Intel and ARM (Apple Silicon). Multi-platform images do not take up additional space on your Docker host.

      Docker images for previous releases contain Intel binaries only. Intel binaries can run on ARM systems, but with a significant reduction in performance.

      CockroachDB on ARM for macOS is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments.

      {% endcapture %} -Use one of the options below to install CockroachDB. +{{site.data.alerts.callout_info}} +CockroachDB on macOS is experimental and not suitable for production deployments. +{{site.data.alerts.end}} + +Use one of the options below to install CockroachDB. To upgrade an existing cluster, refer to [Upgrade to {{ page.version.version }}]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). For limitations specific to geospatial features, refer to [Limitations](#limitations).
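For example, a minimal Homebrew install from the CockroachDB tap, followed by a version check (this assumes Homebrew is already installed):

~~~ shell
brew install cockroachdb/tap/cockroach

# Verify the installed binary.
cockroach version
~~~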
      @@ -87,7 +86,7 @@ true {% endcapture %}
-Download the binary
+Download the binary

      {{ arch_note_binaries }}
1. @@ -230,3 +229,9 @@ CockroachDB runtimes built for the ARM architecture have the following limitations {% include {{ page.version.version }}/misc/install-next-steps.html %} {% include {{ page.version.version }}/misc/diagnostics-callout.html %} + +## Limitations + +{% comment %}v22.2.0+{% endcomment %} + +On macOS ARM systems, [spatial features]({% link {{ page.version.version }}/spatial-data-overview.md %}) are disabled due to an issue with macOS code signing for the GEOS libraries. Users needing spatial features on an ARM Mac may instead [run the Intel binary](#install-the-binary) or use the [Docker container image](#use-docker). Refer to [GitHub issue #93161](https://github.com/cockroachdb/cockroach/issues/93161) for more information. diff --git a/src/current/v24.2/install-cockroachdb-windows.md b/src/current/v24.2/install-cockroachdb-windows.md index 1ad740a2de7..c7d537f77a5 100644 --- a/src/current/v24.2/install-cockroachdb-windows.md +++ b/src/current/v24.2/install-cockroachdb-windows.md @@ -15,85 +15,96 @@ docs_area: deploy {% include cockroachcloud/use-cockroachcloud-instead.md %} -See [Release Notes](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see [Cluster Upgrade](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}/upgrade-cockroach-version). +{% include latest-release-details.md %} -Use one of the options below to install CockroachDB. +{% include windows_warning.md %} + +Use one of the options below to install CockroachDB. To upgrade an existing cluster, refer to [Upgrade to {{ page.version.version }}]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}).

        Download the executable

- {% include windows_warning.md %} - -1. Using PowerShell, run the following script to download the [CockroachDB {{ page.release_info.version }} archive for Windows](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.windows-6.2-amd64.zip) and copy the binary into your `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ powershell - $ErrorActionPreference = "Stop"; [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;$ProgressPreference = 'SilentlyContinue'; $null = New-Item -Type Directory -Force $env:appdata/cockroach; Invoke-WebRequest -Uri https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.windows-6.2-amd64.zip -OutFile cockroach.zip; Expand-Archive -Force -Path cockroach.zip; Copy-Item -Force "cockroach/cockroach-{{ page.release_info.version }}.windows-6.2-amd64/cockroach.exe" -Destination $env:appdata/cockroach; $Env:PATH += ";$env:appdata/cockroach" - ~~~ - - {{site.data.alerts.callout_success}} - To run a PowerShell script from a file, use syntax like `powershell.exe -Command "{path_to_script}"`. - {{site.data.alerts.end}} - - We recommend adding `;$env:appdata/cockroach` to the `PATH` variable for your system environment so you can execute [cockroach commands](cockroach-commands.html) from any shell. See [Microsoft's environment variable documentation](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_environment_variables#saving-changes-to-environment-variables) for more information. -1. In PowerShell or the Windows terminal, check that the installation succeeded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - cockroach version - ~~~ - -1. Keep up-to-date with CockroachDB releases and best practices: - - {% include marketo-install.html uid="1" %} +You can download and install CockroachDB for Windows in two ways. Either: + +- **Recommended**: Visit [Releases]({% link releases/index.md %}?filters=windows) to download CockroachDB. The archive contains the `cockroach.exe` binary. Extract the archive and optionally copy the `cockroach.exe` binary into your `PATH` so you can execute [cockroach commands]({% link {{ page.version.version }}/cockroach-commands.md %}) from any shell. Releases are rolled out gradually, so the latest version may not yet be downloadable. + +- Instead of downloading the binary directly, you can use PowerShell to download and install CockroachDB: + 1. Visit [Releases]({% link releases/index.md %}) and make a note of the full version of CockroachDB to install, such as {{ page.version.name }}. Releases are rolled out gradually, so the latest version may not yet be downloadable. + 1. Save the following PowerShell script and replace the following: + - `{ VERSION }`: the full version of CockroachDB to download, such as `{{ page.version.name }}`. Replace this value in both the `Invoke-WebRequest` statement and the `Copy-Item` statement. + - `{ INSTALL_DIRECTORY }`: the local file path where the `cockroach.exe` executable will be installed. Replace the value in both the `Destination` argument and the `$Env:PATH` statement, which adds the destination directory to your `PATH`. 
+ + {% include_cached copy-clipboard.html %} + ~~~ powershell + $ErrorActionPreference = "Stop"; + [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;$ProgressPreference = 'SilentlyContinue'; $null = New-Item -Type Directory -Force $env:appdata/cockroach; + Invoke-WebRequest -Uri https://binaries.cockroachdb.com/cockroach-{ VERSION }.windows-6.2-amd64.zip -OutFile cockroach.zip; + Expand-Archive -Force -Path cockroach.zip; + Copy-Item -Force "cockroach/cockroach-{ VERSION }.windows-6.2-amd64/cockroach.exe" -Destination $env:{ INSTALL_DIRECTORY };$Env:PATH += ";$env:{ INSTALL_DIRECTORY }" + ~~~ + 1. Run the PowerShell script. To run a PowerShell script from a file, use syntax like: + + {% include_cached copy-clipboard.html %} + ~~~ powershell + powershell.exe -Command "{path_to_script}" + ~~~ + 1. Check that the installation succeeded and that you can run `cockroach` commands: + {% include_cached copy-clipboard.html %} + ~~~ shell + cockroach version + ~~~

        Use Kubernetes

        -To orchestrate CockroachDB locally using Kubernetes, either with configuration files or the Helm package manager, see Orchestrate CockroachDB Locally with Minikube. +To orchestrate CockroachDB locally using [Kubernetes](https://kubernetes.io/), either with configuration files or the [Helm](https://helm.sh/) package manager, refer to [Orchestrate a local cluster with Kubernetes]({% link {{ page.version.version}}/orchestrate-a-local-cluster-with-kubernetes.md %}). +

        Use Docker

        -{{site.data.alerts.callout_danger}}Running a stateful application like CockroachDB in Docker is more complex and error-prone than most uses of Docker. Unless you are very experienced with Docker, we recommend starting with a different installation and deployment method.{{site.data.alerts.end}} +This section shows how to install CockroachDB on a Windows host using Docker. On a Linux or Windows Docker host, the image creates a Linux container. + +{{site.data.alerts.callout_danger}} +Running a stateful application like CockroachDB in Docker is more complex and error-prone than most uses of Docker. Unless you are very experienced with Docker, we recommend starting with a different installation and deployment method. +{{site.data.alerts.end}} -For CockroachDB v22.2.beta-5 and above, Docker images are multi-platform images that contains binaries for both Intel and ARM. CockroachDB on ARM systems is experimental and is not yet qualified for production use and not eligible for support or uptime SLA commitments. Multi-platform images do not take up additional space on your Docker host. +Docker images are [multi-platform images](https://docs.docker.com/build/building/multi-platform/) that contain binaries for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host. -Docker images for previous releases contain Intel binaries only. Intel binaries can run on ARM systems, but with a significant reduction in performance. +Intel binaries can run on ARM systems, but with a significant reduction in performance. 1. Install Docker for Windows. {{site.data.alerts.callout_success}} - Docker for Windows requires 64bit Windows 10 Pro and Microsoft Hyper-V. Please see the official documentation for more details. Note that if your system does not satisfy the stated requirements, you can try using Docker Toolbox. + Docker for Windows requires 64-bit Windows 10 Pro and Microsoft Hyper-V. Refer to the [official documentation](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install) for more details. If your system does not satisfy the stated requirements, you can try using [Docker Toolbox](https://docs.docker.com/toolbox/overview/). {{site.data.alerts.end}} -1. In PowerShell, confirm that the Docker daemon is running in the background: +1. In PowerShell, confirm that Docker is running in the background: {% include_cached copy-clipboard.html %} ~~~ powershell docker version ~~~ - If you see an error, start Docker for Windows. + If you see an error, verify your Docker for Windows installation, then try again. -1. Share your local drives. This makes it possible to mount local directories as data volumes to persist node data after containers are stopped or deleted. +1. [Enable synchronized drive sharing](https://docs.docker.com/desktop/synchronized-file-sharing/) so you can mount local directories as data volumes to persist node data after containers are stopped or deleted. -1. In PowerShell, pull the image for the {{page.release_info.version}} release of CockroachDB from Docker Hub: +1. Visit [Docker Hub](https://hub.docker.com/layers/{{page.release_info.docker_image}}/) and make a note of the full version of CockroachDB to pull. Releases are rolled out gradually, so the latest version may not yet be available. Using the `latest` tag is not recommended; to pull the latest release within a major version, use a tag like `latest-{{ page.version.version }}`. The following command always pulls the `{{ page.version.name }}` image. 
{% include_cached copy-clipboard.html %} ~~~ powershell - docker pull {{page.release_info.docker_image}}:{{page.release_info.version}} + docker pull {{ page.release_info.docker_image }}:{{ page.version.name }} ~~~ -1. Keep up-to-date with CockroachDB releases and best practices: +
        - {% include marketo-install.html uid="2" %} +Keep up-to-date with CockroachDB releases and best practices: -
+{% include marketo-install.html uid="1" %}

What's next?

diff --git a/src/current/v24.2/kibana.md b/src/current/v24.2/kibana.md index 89ca122908a..7d3cd6341c9 100644 --- a/src/current/v24.2/kibana.md +++ b/src/current/v24.2/kibana.md @@ -8,7 +8,7 @@ docs_area: manage [Kibana](https://www.elastic.co/kibana/) is a platform that visualizes data on the [Elastic Stack](https://www.elastic.co/elastic-stack/). This page shows how to use the [CockroachDB module for Metricbeat](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-cockroachdb.html) to collect metrics exposed by your CockroachDB {{ site.data.products.core }} cluster's [Prometheus endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#prometheus-endpoint) in Elasticsearch and how to visualize those metrics with Kibana. {{site.data.alerts.callout_success}} -To export metrics from a CockroachDB {{ site.data.products.cloud }} cluster, refer to [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster](https://www.cockroachlabs.com/docs/cockroachcloud/export-metrics) instead of this page. +To export metrics from a CockroachDB {{ site.data.products.cloud }} cluster, refer to [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/export-metrics.md %}) instead of this page. {{site.data.alerts.end}} In this tutorial, you will enable the CockroachDB module for Metricbeat and visualize the data in Kibana. @@ -27,7 +27,7 @@ Either of the following: - Self-managed [Elastic Stack](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html) with [Metricbeat installed](https://www.elastic.co/guide/en/beats/metricbeat/7.13/metricbeat-installation-configuration.html) {{site.data.alerts.callout_info}} -This tutorial assumes that you have [started a secure CockroachDB cluster]({% link {{ page.version.version }}/secure-a-cluster.md %}). [CockroachDB {{ site.data.products.cloud }}](https://www.cockroachlabs.com/docs/cockroachcloud) does not expose a compatible monitoring endpoint. +This tutorial assumes that you have [started a secure CockroachDB cluster]({% link {{ page.version.version }}/secure-a-cluster.md %}). [CockroachDB {{ site.data.products.cloud }}]({% link cockroachcloud/index.md %}) does not expose a compatible monitoring endpoint. {{site.data.alerts.end}} ## Step 1. Enable CockroachDB module diff --git a/src/current/v24.2/known-limitations.md b/src/current/v24.2/known-limitations.md index 491a9d15729..a1af91323e8 100644 --- a/src/current/v24.2/known-limitations.md +++ b/src/current/v24.2/known-limitations.md @@ -26,9 +26,10 @@ This section describes limitations from previous CockroachDB versions that still CockroachDB supports the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/protocol.html) and the majority of its syntax. For a list of known differences in syntax and behavior between CockroachDB and PostgreSQL, see [Features that differ from PostgreSQL]({% link {{ page.version.version }}/postgresql-compatibility.md %}#features-that-differ-from-postgresql). -#### `AS OF SYSTEM TIME` does not support placeholders +#### `AS OF SYSTEM TIME` limitations -CockroachDB does not support placeholders in [`AS OF SYSTEM TIME`]({% link {{ page.version.version }}/as-of-system-time.md %}). The time value must be embedded in the SQL string. 
[#30955](https://github.com/cockroachdb/cockroach/issues/30955) +- {% include {{ page.version.version }}/known-limitations/aost-limitations.md %} +- {% include {{ page.version.version }}/known-limitations/create-statistics-aost-limitation.md %} #### `COPY` syntax not supported by CockroachDB @@ -192,8 +193,10 @@ It is currently not possible to [add a column]({% link {{ page.version.version } ~~~ ~~~ -ERROR: nextval(): unimplemented: cannot evaluate scalar expressions containing sequence operations in this context +ERROR: failed to construct index entries during backfill: nextval(): unimplemented: cannot evaluate scalar expressions containing sequence operations in this context SQLSTATE: 0A000 +HINT: You have attempted to use a feature that is not yet implemented. +See: https://go.crdb.dev/issue-v/42508/v24.2 ~~~ [#42508](https://github.com/cockroachdb/cockroach/issues/42508) @@ -447,7 +450,6 @@ Accessing the DB Console for a secure cluster now requires login information (i. {% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %} - {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %} - {% include {{ page.version.version }}/known-limitations/cutover-stop-application.md %} -- {% include {{ page.version.version }}/known-limitations/fast-cutback-latest-timestamp.md %} #### `RESTORE` limitations @@ -461,6 +463,10 @@ The [`COMMENT ON`]({% link {{ page.version.version }}/comment-on.md %}) statemen As a workaround, take a cluster backup instead, as the `system.comments` table is included in cluster backups. [#44396](https://github.com/cockroachdb/cockroach/issues/44396) +#### `SHOW BACKUP` does not support symlinks for nodelocal + +{% include {{page.version.version}}/known-limitations/show-backup-symlink.md %} + ### Change data capture Change data capture (CDC) provides efficient, distributed, row-level changefeeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. It has the following known limitations: @@ -468,16 +474,14 @@ Change data capture (CDC) provides efficient, distributed, row-level changefeeds {% include {{ page.version.version }}/known-limitations/cdc.md %} - {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %} {% include {{ page.version.version }}/known-limitations/cdc-queries.md %} +- {% include {{ page.version.version }}/known-limitations/cdc-queries-column-families.md %} +- {% include {{ page.version.version }}/known-limitations/changefeed-column-family-message.md %} #### `ALTER CHANGEFEED` limitations {% include {{ page.version.version }}/known-limitations/alter-changefeed-limitations.md %} - {% include {{ page.version.version }}/known-limitations/alter-changefeed-cdc-queries.md %} -### Physical cluster replication - -{% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %} - ### Performance optimization #### Optimizer and locking behavior diff --git a/src/current/v24.2/learn-cockroachdb-sql.md b/src/current/v24.2/learn-cockroachdb-sql.md index e5737403d19..e513ef15542 100644 --- a/src/current/v24.2/learn-cockroachdb-sql.md +++ b/src/current/v24.2/learn-cockroachdb-sql.md @@ -10,7 +10,7 @@ This tutorial guides you through some of the most essential CockroachDB SQL stat For a complete list of supported SQL statements and related details, see [SQL Statements]({% link {{ page.version.version }}/sql-statements.md %}). {{site.data.alerts.callout_info}} -This tutorial is for {{site.data.products.core}} users. 
If you are working with {{site.data.products.dedicated}} or {{site.data.products.serverless}}, you can run this tutorial against [a cluster running in the cloud](https://www.cockroachlabs.com/docs/cockroachcloud/learn-cockroachdb-sql). +This tutorial is for {{site.data.products.core}} users. If you are working with {{site.data.products.dedicated}} or {{site.data.products.serverless}}, you can run this tutorial against [a cluster running in the cloud]({% link cockroachcloud/learn-cockroachdb-sql.md %}). {{site.data.alerts.end}} ## Start CockroachDB diff --git a/src/current/v24.2/licensing-faqs.md b/src/current/v24.2/licensing-faqs.md index 83d815750c3..20fae864383 100644 --- a/src/current/v24.2/licensing-faqs.md +++ b/src/current/v24.2/licensing-faqs.md @@ -5,6 +5,10 @@ toc: true docs_area: get_started --- +{{site.data.alerts.callout_info}} +{% include common/license/evolving.md %} +{{site.data.alerts.end}} + CockroachDB code is primarily licensed in two ways: - [Business Source License (BSL)](#bsl) diff --git a/src/current/v24.2/log-sql-statistics-to-datadog.md b/src/current/v24.2/log-sql-statistics-to-datadog.md index b3c0e14924c..e6fe5a1913b 100644 --- a/src/current/v24.2/log-sql-statistics-to-datadog.md +++ b/src/current/v24.2/log-sql-statistics-to-datadog.md @@ -88,12 +88,9 @@ Set the [`sql.telemetry.query_sampling.mode` cluster setting]({% link {{ page.ve SET CLUSTER SETTING sql.telemetry.query_sampling.mode = 'statement'; ~~~ -Set the [`sql.telemetry.query_sampling.max_event_frequency` cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-telemetry-query-sampling-max-event-frequency) to `100000` to emit query events at a higher rate per second than the default value of `8`, which is extremely conservative for the Datadog HTTP API. This cluster setting controls the max event frequency at which CockroachDB samples queries for telemetry. +Configure the following [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) to a value that is dependent on the level of granularity you require and how much performance impact from frequent logging you can tolerate: -{% include_cached copy-clipboard.html %} -~~~ sql -SET CLUSTER SETTING sql.telemetry.query_sampling.max_event_frequency = 100000; -~~~ +- [`sql.telemetry.query_sampling.max_event_frequency`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-telemetry-query-sampling-max-event-frequency) (default `8`) is the max event frequency (events per second) at which we sample executed queries for telemetry. If sampling mode is set to `'transaction'`, this setting is ignored. In practice, this means that we only sample an executed query if 1/`max_event_frequency` seconds have elapsed since the last executed query was sampled. Sampling impacts the volume of query events emitted, which can have a downstream impact on workload performance and third-party processing costs. Slowly increase this sampling threshold and monitor potential impact. {{site.data.alerts.callout_info}} The `sql.telemetry.query_sampling.max_event_frequency` cluster setting and the `buffering` options in the `logs.yaml` control how many events are emitted to Datadog and how many can potentially be dropped. Adjust this setting and these options according to your workload, depending on the size of events and the queries per second (QPS) observed through monitoring. 
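For example, a possible starting point that raises the statement sampling rate and confirms the change; the value `100` is an illustrative assumption to be tuned while monitoring workload impact, and `$CRDB_URL` is a placeholder connection string:

~~~ shell
cockroach sql --url "$CRDB_URL" \
  --execute="SET CLUSTER SETTING sql.telemetry.query_sampling.max_event_frequency = 100;"

# Confirm the current value before increasing it further.
cockroach sql --url "$CRDB_URL" \
  --execute="SHOW CLUSTER SETTING sql.telemetry.query_sampling.max_event_frequency;"
~~~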
@@ -117,7 +114,7 @@ SET CLUSTER SETTING sql.telemetry.query_sampling.mode = 'transaction'; Configure the following [cluster settings]({% link {{ page.version.version }}/cluster-settings.md %}) to values that are dependent on the level of granularity you require and how much performance impact from frequent logging you can tolerate: -- [`sql.telemetry.transaction_sampling.max_event_frequency`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-telemetry-transaction-sampling-max-event-frequency) (default `8`) is the max event frequency (events per second) at which we sample transactions for telemetry. If sampling mode is set to 'statement', this setting is ignored. In practice, this means that we only sample a transaction if 1/max_event_frequency seconds have elapsed since the last transaction was sampled. +- [`sql.telemetry.transaction_sampling.max_event_frequency`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-telemetry-transaction-sampling-max-event-frequency) (default `8`) is the max event frequency (events per second) at which we sample transactions for telemetry. If sampling mode is set to `'statement'`, this setting is ignored. In practice, this means that we only sample a transaction if 1/`max_event_frequency` seconds have elapsed since the last transaction was sampled. Sampling impacts the volume of transaction events emitted, which can have a downstream impact on workload performance and third-party processing costs. Slowly increase this sampling threshold and monitor potential impact. - [`sql.telemetry.transaction_sampling.statement_events_per_transaction.max`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-telemetry-transaction-sampling-statement-events-per-transaction-max) (default `50`) is the maximum number of statement events to log for every sampled transaction. Note that statements that are always captured do not adhere to this limit. Logs are always captured for statements under the following conditions: - Statements that are not of type [DML (data manipulation language)]({% link {{ page.version.version }}/sql-statements.md %}#data-manipulation-statements). These statement types are: - [DDL (data definition language)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) diff --git a/src/current/v24.2/migrate-from-mysql.md b/src/current/v24.2/migrate-from-mysql.md index 2a1adcde9a2..f5b1695f9de 100644 --- a/src/current/v24.2/migrate-from-mysql.md +++ b/src/current/v24.2/migrate-from-mysql.md @@ -17,7 +17,7 @@ If you need help migrating to CockroachDB, contact our -You can use the following MOLT (Migrate Off Legacy Technology) tools to simplify these steps: +You can use the following [MOLT (Migrate Off Legacy Technology) tools]({% link molt/molt-overview.md %}) to simplify these steps: -- [Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) +- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) - [MOLT Fetch]({% link molt/molt-fetch.md %}) - [MOLT Verify]({% link molt/molt-verify.md %}) @@ -225,17 +225,17 @@ You can use the following MOLT (Migrate Off Legacy Technology) tools to simplify First, convert your database schema to an equivalent CockroachDB schema: -- Use the [Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) to convert your schema line-by-line. 
This requires a free [CockroachDB {{ site.data.products.cloud }} account](https://www.cockroachlabs.com/docs/cockroachcloud/create-an-account). The tool will convert the syntax, identify [unimplemented features and syntax incompatibilities](#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices](#schema-design-best-practices). +- Use the [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) to convert your schema line-by-line. This requires a free [CockroachDB {{ site.data.products.cloud }} account]({% link cockroachcloud/create-an-account.md %}). The tool will convert the syntax, identify [unimplemented features and syntax incompatibilities](#unimplemented-features-and-syntax-incompatibilities) in the schema, and suggest edits according to CockroachDB [best practices](#schema-design-best-practices). {{site.data.alerts.callout_info}} The Schema Conversion Tool accepts `.sql` files from PostgreSQL, MySQL, Oracle, and Microsoft SQL Server. {{site.data.alerts.end}} -- Alternatively, manually convert the schema according to our [schema design best practices](#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page#export-the-schema) from the Schema Conversion Tool to finish the conversion manually. +- Alternatively, manually convert the schema according to our [schema design best practices](#schema-design-best-practices){% comment %}and data type mappings{% endcomment %}. You can also [export a partially converted schema]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool to finish the conversion manually. Then import the converted schema to a CockroachDB cluster: -- For CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.serverless }} or {{ site.data.products.dedicated }} database](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page#migrate-the-schema). -- For CockroachDB {{ site.data.products.core }}, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). You can [export a converted schema file](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page#export-the-schema) from the Schema Conversion Tool. +- For CockroachDB {{ site.data.products.cloud }}, use the Schema Conversion Tool to [migrate the converted schema to a new {{ site.data.products.serverless }} or {{ site.data.products.dedicated }} database]({% link cockroachcloud/migrations-page.md %}#migrate-the-schema). +- For CockroachDB {{ site.data.products.core }}, pipe the [data definition language (DDL)]({% link {{ page.version.version }}/sql-statements.md %}#data-definition-statements) directly into [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}). You can [export a converted schema file]({% link cockroachcloud/migrations-page.md %}#export-the-schema) from the Schema Conversion Tool. 
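As a sketch of the {{ site.data.products.core }} path, assuming the exported schema was saved as `converted_schema.sql` (the file name and connection URL are placeholders):

~~~ shell
# Pipe the converted DDL directly into cockroach sql.
cockroach sql --url "postgresql://root@localhost:26257/defaultdb?sslmode=disable" \
  < converted_schema.sql
~~~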
{{site.data.alerts.callout_success}} For the fastest performance, you can use a [local, single-node CockroachDB cluster]({% link {{ page.version.version }}/cockroach-start-single-node.md %}#start-a-single-node-cluster) to convert your schema and [check the results of queries](#test-query-results-and-performance). {{site.data.alerts.end}} @@ -262,15 +262,13 @@ Note that CockroachDB defaults to the [`SERIALIZABLE`]({% link {{ page.version.v ##### Shadowing -You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration. - -The [CockroachDB Live Migration Service (MOLT LMS)]({% link molt/live-migration-service.md %}) can [perform shadowing]({% link molt/live-migration-service.md %}#shadowing-modes). This is intended only for [testing](#test-query-results-and-performance) or [performing a dry run](#perform-a-dry-run). Shadowing should **not** be used in production when performing a [live migration](#zero-downtime). +You can "shadow" your production workload by executing your source SQL statements on CockroachDB in parallel. You can then [validate the queries](#test-query-results-and-performance) on CockroachDB for consistency, performance, and potential issues with the migration. Shadowing should **not** be used in production when performing a [live migration](#zero-downtime). ##### Test query results and performance You can manually validate your queries by testing a subset of "critical queries" on an otherwise idle CockroachDB cluster: -- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/statements-page) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}). +- Check the application logs for error messages and the API response time. If application requests are slower than expected, use the **SQL Activity** page on the [CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/statements-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-statements-page.md %}) to find the longest-running queries that are part of that application request. If necessary, tune the queries according to our best practices for [SQL performance]({% link {{ page.version.version }}/performance-best-practices-overview.md %}). - Compare the results of the queries and check that they are identical in both the source database and CockroachDB. To do this, you can use [MOLT Verify]({% link molt/molt-verify.md %}). @@ -326,16 +324,14 @@ The following is a high-level overview of the migration steps. The two approache To prioritize consistency and minimize downtime: -1. Set up the [CockroachDB Live Migration Service (MOLT LMS)]({% link molt/live-migration-service.md %}) to proxy for application traffic between your source database and CockroachDB. Do **not** shadow the application traffic. -1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. 
Use the tool to [**replicate ongoing changes**]({% link molt/molt-fetch.md %}#replication) after it performs the initial load of data into CockroachDB. +1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Enable [**continuous replication**]({% link molt/molt-fetch.md %}#load-data-and-replicate-changes) after it performs the initial load of data into CockroachDB. 1. As the data is migrating, use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB. -1. After nearly all data from your source database has been moved to CockroachDB (for example, with a <1-second delay or <1000 rows), use MOLT LMS to begin a [*consistent cutover*]({% link molt/live-migration-service.md %}#consistent-cutover) and stop application traffic to your source database. **This begins downtime.** +1. Once nearly all data from your source database has been moved to CockroachDB (for example, with a <1 second delay or <1000 rows), stop application traffic to your source database. **This begins downtime.** 1. Wait for MOLT Fetch to finish replicating changes to CockroachDB. -1. Use MOLT LMS to commit the [consistent cutover]({% link molt/live-migration-service.md %}#consistent-cutover). This resumes application traffic, now to CockroachDB. +1. Perform a [cutover](#cutover-strategy) by resuming application traffic, now to CockroachDB. To achieve zero downtime with inconsistency: -1. Set up the [CockroachDB Live Migration Service (MOLT LMS)]({% link molt/live-migration-service.md %}) to proxy for application traffic between your source database and CockroachDB. Use a [shadowing mode]({% link molt/live-migration-service.md %}#shadowing-modes) to run application queries simultaneously on your source database and CockroachDB. 1. Use [MOLT Fetch]({% link molt/molt-fetch.md %}) to move the source data to CockroachDB. Use the tool to **replicate ongoing changes** after performing the initial load of data into CockroachDB. 1. As the data is migrating, you can use [MOLT Verify]({% link molt/molt-verify.md %}) to validate the consistency of the data between the source database and CockroachDB. 1. After nearly all data from your source database has been moved to CockroachDB (for example, with a <1 second delay or <1000 rows), perform an [*immediate cutover*](#cutover-strategy) by pointing application traffic to CockroachDB. 
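A sketch of a MOLT Verify invocation for the validation steps above; both connection strings are placeholder assumptions for the MySQL source and the CockroachDB target:

~~~ shell
# Compare row data between the source database and CockroachDB.
molt verify \
  --source 'mysql://migration_user:password@mysql-host:3306/defaultdb' \
  --target 'postgresql://root@crdb-host:26257/defaultdb?sslmode=verify-full'
~~~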
@@ -354,7 +350,7 @@ After you have successfully [conducted the migration](#conduct-the-migration): - [Can a PostgreSQL or MySQL application be migrated to CockroachDB?]({% link {{ page.version.version }}/frequently-asked-questions.md %}#can-a-postgresql-or-mysql-application-be-migrated-to-cockroachdb) - [PostgreSQL Compatibility]({% link {{ page.version.version }}/postgresql-compatibility.md %}) -- [Use the Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) +- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) - [Schema Design Overview]({% link {{ page.version.version }}/schema-design-overview.md %}) - [Create a User-defined Schema]({% link {{ page.version.version }}/schema-design-schema.md %}) - [Primary key best practices]({% link {{ page.version.version }}/schema-design-table.md %}#primary-key-best-practices) diff --git a/src/current/v24.2/migration-strategy-lift-and-shift.md b/src/current/v24.2/migration-strategy-lift-and-shift.md index 8c7404f84c8..7417b4c7b3c 100644 --- a/src/current/v24.2/migration-strategy-lift-and-shift.md +++ b/src/current/v24.2/migration-strategy-lift-and-shift.md @@ -72,7 +72,7 @@ It's important to decide which data formats, storage media, and database feature Data formats that can be imported by CockroachDB include: -- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page). +- [SQL]({% link {{ page.version.version }}/schema-design-overview.md %}) for the [schema import]({% link cockroachcloud/migrations-page.md %}). - [CSV]({% link {{ page.version.version }}/migrate-from-csv.md %}) for table data. - [Avro]({% link {{ page.version.version }}/migrate-from-avro.md %}) for table data. @@ -121,7 +121,7 @@ For more information about import performance, see [Import Performance Best Prac ## See also - [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}) -- [Use the Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) +- [Use the Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) - [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}) - [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) - [Migrate and Replicate Data with Qlik Replicate]({% link {{ page.version.version }}/qlik.md %}) diff --git a/src/current/v24.2/monitoring-and-alerting.md b/src/current/v24.2/monitoring-and-alerting.md index 7bf1255a704..a7880f7c02a 100644 --- a/src/current/v24.2/monitoring-and-alerting.md +++ b/src/current/v24.2/monitoring-and-alerting.md @@ -7,7 +7,7 @@ docs_area: manage In addition to CockroachDB's [built-in safeguards against failure]({% link {{ page.version.version }}/frequently-asked-questions.md %}#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention. -This page describes the monitoring and observability tools that are built into CockroachDB {{ site.data.products.core }} and shows how to collect your cluster's metrics using external tools like Prometheus's AlertManager for event-based alerting. 
To export metrics from a CockroachDB {{ site.data.products.cloud }} cluster, refer to [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster](https://www.cockroachlabs.com/docs/cockroachcloud/export-metrics) instead of this page. For more details, refer to: +This page describes the monitoring and observability tools that are built into CockroachDB {{ site.data.products.core }} and shows how to collect your cluster's metrics using external tools like Prometheus's AlertManager for event-based alerting. To export metrics from a CockroachDB {{ site.data.products.cloud }} cluster, refer to [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/export-metrics.md %}) instead of this page. For more details, refer to: - [Monitor CockroachDB with Prometheus]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}) - [Third-party Monitoring Tools]({% link {{ page.version.version }}/third-party-monitoring-tools.md %}) diff --git a/src/current/v24.2/movr-flask-deployment.md b/src/current/v24.2/movr-flask-deployment.md index 0ddb93896eb..2ce85089d43 100644 --- a/src/current/v24.2/movr-flask-deployment.md +++ b/src/current/v24.2/movr-flask-deployment.md @@ -21,7 +21,7 @@ In addition to the requirements listed in [Setting Up a Virtual Environment for ## Multi-region database deployment -In production, you want to start a secure CockroachDB cluster, with nodes on machines located in different areas of the world. To deploy CockroachDB in multiple regions, we recommend using [CockroachDB {{ site.data.products.dedicated }}](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart). +In production, you want to start a secure CockroachDB cluster, with nodes on machines located in different areas of the world. To deploy CockroachDB in multiple regions, we recommend using [CockroachDB {{ site.data.products.dedicated }}]({% link cockroachcloud/quickstart.md %}). {{site.data.alerts.callout_info}} You can also deploy CockroachDB manually. For instructions, see the [Manual Deployment]({% link {{ page.version.version }}/manual-deployment.md %}) page of the Cockroach Labs documentation site. @@ -235,7 +235,7 @@ Some time after you have deployed your application, you will likely need to push ## See also {% comment %} [MovR (live demo)](https://movr.cloud){% endcomment %} -- [CockroachDB {{ site.data.products.cloud }} documentation](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) +- [CockroachDB {{ site.data.products.cloud }} documentation]({% link cockroachcloud/quickstart.md %}) - [Google Cloud Platform documentation](https://cloud.google.com/docs/) - [Docker documentation](https://docs.docker.com/) - [Kubernetes documentation](https://kubernetes.io/docs/home/) diff --git a/src/current/v24.2/multiregion-scale-application.md b/src/current/v24.2/multiregion-scale-application.md index 58ebda2d048..ccf9f2282f1 100644 --- a/src/current/v24.2/multiregion-scale-application.md +++ b/src/current/v24.2/multiregion-scale-application.md @@ -42,7 +42,7 @@ Scale the cluster by adding nodes to the cluster in new regions. For instructions on adding nodes to an existing cluster, see one of the following pages: -- For managed CockroachDB {{ site.data.products.cloud }} deployments, see [Cluster Management](https://www.cockroachlabs.com/docs/cockroachcloud/cluster-management). +- For managed CockroachDB {{ site.data.products.cloud }} deployments, see [Cluster Management]({% link cockroachcloud/cluster-management.md %}). 
- For orchestrated deployments, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}). - For manual deployments, see [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) and [Manual Deployment]({% link {{ page.version.version }}/manual-deployment.md %}). @@ -68,7 +68,7 @@ Scaling application deployments in multiple regions can greatly improve latency For guidance on connecting to CockroachDB from an application deployment, see one of the following pages: -- For connecting to managed, CockroachDB {{ site.data.products.cloud }} deployments, see [Connect to Your CockroachDB {{ site.data.products.dedicated }} Cluster](https://www.cockroachlabs.com/docs/cockroachcloud/connect-to-your-cluster) and [Connect to the Database (CockroachDB {{ site.data.products.dedicated }})]({% link {{ page.version.version }}/connect-to-the-database.md %}?filters=dedicated). +- For connecting to managed, CockroachDB {{ site.data.products.cloud }} deployments, see [Connect to Your CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/connect-to-your-cluster.md %}) and [Connect to the Database (CockroachDB {{ site.data.products.dedicated }})]({% link {{ page.version.version }}/connect-to-the-database.md %}?filters=dedicated). - For connecting to a standard CockroachDB deployment, see [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}) and [Connect to the Database]({% link {{ page.version.version }}/connect-to-the-database.md %}). To limit the latency between the application and the database, each deployment of the application should communicate with the closest database deployment. For details on configuring database connections for individual application deployments, consult your cloud provider's documentation. For an example using Google Cloud services, see [Deploy a Global, Serverless Application]({% link {{ page.version.version }}/movr-flask-deployment.md %}). diff --git a/src/current/v24.2/node-shutdown.md b/src/current/v24.2/node-shutdown.md index 7a18268b572..479b8b6bebf 100644 --- a/src/current/v24.2/node-shutdown.md +++ b/src/current/v24.2/node-shutdown.md @@ -13,7 +13,7 @@ There are two ways to handle node shutdown: - Clients are disconnected, and subsequent connection requests are sent to other nodes. - The node's data store is preserved and will be reused as long as the node restarts in a short time. Otherwise, the node's data is moved to other nodes. - After the node is drained, you can terminate the `cockroach` process, perform maintenance, then restart it. CockroachDB automatically drains a node when [upgrading its cluster version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster. + After the node is drained, you can manually terminate the `cockroach` process to perform maintenance, then restart the process for the node to rejoin the cluster. {% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. A node is also automatically drained when [upgrading its major version]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}). Draining a node is lightweight because it generates little node-to-node traffic across the cluster. 
- **Decommission a node** to permanently remove it from the cluster, such as when scaling down the cluster or replacing the node due to hardware failure. During decommission: - The node is drained automatically if you have not manually drained it. - The node's data is moved off the node to other nodes. This [replica rebalancing]({% link {{ page.version.version }}/architecture/glossary.md %}#replica) generates a large amount of node-to-node network traffic, so decommissioning a node is considered a heavyweight operation. @@ -62,12 +62,16 @@ After this stage, the node is automatically drained. However, to avoid possible
-An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]{% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. When draining is complete, you can send a `SIGTERM` signal to the `cockroach` process to shut it down, perform the required maintenance, and then restart the `cockroach` process on the node. +An operator [initiates the draining process](#drain-the-node-and-terminate-the-node-process) on the node. Draining a node disconnects clients after active queries are completed, and transfers any [range leases]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) and [Raft leaderships]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) to other nodes, but does not move replicas or data off of the node. + +When draining is complete, the node must be shut down prior to any maintenance. After a 60-second wait at minimum, you can send a `SIGTERM` signal to the `cockroach` process to shut it down. {% include_cached new-in.html version="v24.2" %}The `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) automatically terminates the `cockroach` process after draining completes. + +After you perform the required maintenance, you can restart the `cockroach` process on the node for it to rejoin the cluster. {% capture drain_early_termination_warning %}Do not terminate the `cockroach` process before all of the phases of draining are complete. Otherwise, you may experience latency spikes until the [leases]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) that were on that node have transitioned to other nodes. It is safe to terminate the `cockroach` process only after a node has completed the drain process. This is especially important in a containerized system, to allow all TCP connections to terminate gracefully.{% endcapture %} {{site.data.alerts.callout_danger}} -{{ drain_early_termination_warning }} If necessary, adjust the [`server.shutdown.initial_wait`](#server-shutdown-initial_wait) and the [termination grace period](https://www.cockroachlabs.com/docs/stable/node-shutdown?filters=decommission#termination-grace-period) cluster settings and adjust your process manager or other deployment tooling to allow adequate time for the node to finish draining before it is terminated or restarted. +{{ drain_early_termination_warning }} If necessary, adjust the [`server.shutdown.initial_wait`](#server-shutdown-initial_wait) and the [termination grace period]({% link {{ page.version.version}}/node-shutdown.md %}?filters=decommission#termination-grace-period) cluster settings and adjust your process manager or other deployment tooling to allow adequate time for the node to finish draining before it is terminated or restarted. {{site.data.alerts.end}}
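For example, a minimal drain-for-maintenance sketch, assuming node ID `4`, a local connection, and certificates in `certs` (all illustrative values):

~~~ shell
# Drain node 4; with --shutdown (new in v24.2), the cockroach process
# terminates automatically once draining completes.
cockroach node drain 4 --shutdown --host=localhost:26257 --certs-dir=certs

# After maintenance, restart the process so the node rejoins the cluster.
cockroach start --certs-dir=certs --join=localhost:26257 --store=cockroach-data
~~~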
@@ -116,7 +120,10 @@ After draining and decommissioning are complete, an operator [terminates the nod After draining is complete: - If the node was drained automatically because the `cockroach` process received a `SIGTERM` signal, the `cockroach` process is automatically terminated when draining is complete. -- If the node was drained manually because an operator issued a `cockroach node drain` command, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process). +- If the node was drained manually because an operator issued a `cockroach node drain` command: + - {% include_cached new-in.html version="v24.2" %}If you pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. + - If the node's major version is being updated, the `cockroach` process terminates automatically after draining completes. + - Otherwise, the `cockroach` process must be terminated manually. A minimum of 60 seconds after draining is complete, send it a `SIGTERM` signal to terminate it. Refer to [Terminate the node process](#drain-the-node-and-terminate-the-node-process). @@ -363,6 +370,10 @@ Do **not** terminate the node process, delete the storage volume, or remove the
### Drain the node and terminate the node process +{% include_cached new-in.html version="v24.2" %}If you passed the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags), the `cockroach` process terminates automatically after draining completes. Otherwise, terminate the `cockroach` process. + +Perform maintenance on the node as required, then restart the `cockroach` process for the node to rejoin the cluster. + {{site.data.alerts.callout_success}} To drain the node without process termination, see [Drain a node manually](#drain-a-node-manually). {{site.data.alerts.end}} @@ -552,7 +563,7 @@ To drain and shut down a node that was started in the foreground with [`cockroac You can use [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) to drain a node separately from decommissioning the node or terminating the node process. -1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete): +1. Run the `cockroach node drain` command, specifying the ID of the node to drain (and optionally a custom [drain timeout](#drain-timeout) to allow draining more time to complete). {% include_cached new-in.html version="v24.2" %}You can optionally pass the `--shutdown` flag to [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. {% include_cached copy-clipboard.html %} ~~~ shell @@ -615,7 +626,7 @@ This example assumes you will decommission node IDs `4` and `5` of a 5-node clus #### Step 2. Drain the nodes manually -Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain: +Run the [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}) command for each node to be removed, specifying the ID of the node to drain. {% include_cached new-in.html version="v24.2" %}Optionally, pass the `--shutdown` flag of [`cockroach node drain`]({% link {{ page.version.version }}/cockroach-node.md %}#flags) to automatically terminate the `cockroach` process after draining completes. {% include_cached copy-clipboard.html %} ~~~ shell diff --git a/src/current/v24.2/online-schema-changes.md b/src/current/v24.2/online-schema-changes.md index 52816bbe1ee..ade8c70f841 100644 --- a/src/current/v24.2/online-schema-changes.md +++ b/src/current/v24.2/online-schema-changes.md @@ -80,6 +80,7 @@ The following statements use the declarative schema changer by default: - [`ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY ... NOT VALID`]({% link {{ page.version.version }}/alter-table.md %}#add-constraint) - [`ALTER TABLE ... VALIDATE CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint) - [`ALTER TABLE ... 
DROP CONSTRAINT`]({% link {{ page.version.version }}/alter-table.md %}#drop-constraint) +- [`CREATE SEQUENCE`]({% link {{ page.version.version }}/create-sequence.md %}) Until all schema change statements are moved to use the declarative schema changer, you can enable and disable the declarative schema changer for supported statements using the `sql.defaults.use_declarative_schema_changer` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-use-declarative-schema-changer) and the `use_declarative_schema_changer` [session variable]({% link {{ page.version.version }}/set-vars.md %}#use_declarative_schema_changer), as sketched below. diff --git a/src/current/v24.2/operational-faqs.md b/src/current/v24.2/operational-faqs.md index d482b34322d..10c04d37472 100644 --- a/src/current/v24.2/operational-faqs.md +++ b/src/current/v24.2/operational-faqs.md @@ -9,7 +9,7 @@ docs_area: get_started ## Why is my process hanging when I try to start nodes with the `--background` flag? {{site.data.alerts.callout_info}} -Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely. +Cockroach Labs recommends against using the `--background` flag when starting a cluster. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}?filters=systemd). When testing locally, starting nodes in the foreground is recommended so you can monitor the runtime closely. If you do use `--background`, you should also set `--pid-file`. To stop or restart a cluster, send a `SIGTERM` or `SIGHUP` signal to the process ID in the PID file. {{site.data.alerts.end}} @@ -244,7 +244,7 @@ You can also see these metrics in [the Clock Offset graph]({% link {{ page.versi ## How do I prepare for planned node maintenance? -Perform a [node shutdown]({% link {{ page.version.version }}/node-shutdown.md %}#perform-node-shutdown) to temporarily stop a node that you plan to restart. +Perform a [node shutdown]({% link {{ page.version.version }}/node-shutdown.md %}#perform-node-shutdown) to temporarily stop the `cockroach` process on the node. When maintenance is complete and you restart the `cockroach` process, the node will rejoin the cluster. ## See also diff --git a/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes-insecure.md b/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes-insecure.md index 11383548467..570e3615d41 100644 --- a/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes-insecure.md +++ b/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes-insecure.md @@ -12,7 +12,7 @@ On top of CockroachDB's built-in automation, you can use a third-party [orchestr This page demonstrates a basic integration with the open-source [Kubernetes](http://kubernetes.io/) orchestration system. Using either the CockroachDB [Helm](https://helm.sh/) chart or a few configuration files, you'll quickly create a 3-node local cluster. 
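Returning to the declarative schema changer toggles named above, a minimal sketch of setting them from a shell; the connection string is a placeholder and `'on'` is one illustrative value.

~~~ shell
# Cluster-wide default for new sessions:
cockroach sql --url "{connection-string}" \
  -e "SET CLUSTER SETTING sql.defaults.use_declarative_schema_changer = 'on'"

# Current session only:
cockroach sql --url "{connection-string}" \
  -e "SET use_declarative_schema_changer = 'on'"
~~~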
You'll run some SQL commands against the cluster and then simulate node failure, watching how Kubernetes auto-restarts without the need for any manual intervention. You'll then scale the cluster with a single command before shutting the cluster down, again with a single command. {{site.data.alerts.callout_info}} -To orchestrate a physically distributed cluster in production, see [Orchestrated Deployments]({% link {{ page.version.version }}/kubernetes-overview.md %}). To deploy a 30-day free CockroachDB {{ site.data.products.dedicated }} cluster instead of running CockroachDB yourself, see the [Quickstart](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart). +To orchestrate a physically distributed cluster in production, see [Orchestrated Deployments]({% link {{ page.version.version }}/kubernetes-overview.md %}). To deploy a 30-day free CockroachDB {{ site.data.products.dedicated }} cluster instead of running CockroachDB yourself, see the [Quickstart]({% link cockroachcloud/quickstart.md %}). {{site.data.alerts.end}} ## Best practices diff --git a/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md b/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md index c30d7019aa9..7c2b9e6b162 100644 --- a/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md +++ b/src/current/v24.2/orchestrate-a-local-cluster-with-kubernetes.md @@ -13,7 +13,7 @@ On top of CockroachDB's built-in automation, you can use a third-party [orchestr This page demonstrates a basic integration with the open-source [Kubernetes](http://kubernetes.io/) orchestration system. Using either the CockroachDB [Helm](https://helm.sh/) chart or a few configuration files, you'll quickly create a 3-node local cluster. You'll run some SQL commands against the cluster and then simulate node failure, watching how Kubernetes auto-restarts without the need for any manual intervention. You'll then scale the cluster with a single command before shutting the cluster down, again with a single command. {{site.data.alerts.callout_info}} -To orchestrate a physically distributed cluster in production, see [Orchestrated Deployments]({% link {{ page.version.version }}/kubernetes-overview.md %}). To deploy a 30-day free CockroachDB {{ site.data.products.dedicated }} cluster instead of running CockroachDB yourself, see the [Quickstart](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart). +To orchestrate a physically distributed cluster in production, see [Orchestrated Deployments]({% link {{ page.version.version }}/kubernetes-overview.md %}). To deploy a 30-day free CockroachDB {{ site.data.products.dedicated }} cluster instead of running CockroachDB yourself, see the [Quickstart]({% link cockroachcloud/quickstart.md %}). {{site.data.alerts.end}} ## Best practices diff --git a/src/current/v24.2/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md b/src/current/v24.2/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md index b80363d3c2b..b47593bda96 100644 --- a/src/current/v24.2/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md +++ b/src/current/v24.2/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md @@ -1048,7 +1048,7 @@ The upgrade process on Kubernetes is a [staged update](https://kubernetes.io/doc 1. Verify that you can upgrade. - To upgrade to a new major version, you must first be on a production release of the previous version. 
The release does not need to be the latest production release of the previous version, but it must be a production [release](https://www.cockroachlabs.com/docs/releases/) and not a testing release (alpha/beta). + To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release]({% link releases/index.md %}) and not a testing release (alpha/beta). Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. @@ -1066,7 +1066,7 @@ The upgrade process on Kubernetes is a [staged update](https://kubernetes.io/doc - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](#scale-the-cluster) to your cluster before beginning your upgrade. {% comment %} -1. Review the [backward-incompatible changes in {{ page.version.version }}](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}#v21-2-0#backward-incompatible-changes) and [deprecated features](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}#v21-2-0#deprecations). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. +1. Review the [backward-incompatible changes in {{ page.version.version }}]({% link releases/{{ page.version.version }}.md %}#backward-incompatible-changes) and [deprecated features]({% link releases/{{ page.version.version }}.md %}#deprecations). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. {% endcomment %} 1. Review the backward-incompatible changes in {{ page.version.version }} and deprecated features. If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. diff --git a/src/current/v24.2/performance-benchmarking-with-tpcc-large.md b/src/current/v24.2/performance-benchmarking-with-tpcc-large.md index 160f537b96d..f8754002d3f 100644 --- a/src/current/v24.2/performance-benchmarking-with-tpcc-large.md +++ b/src/current/v24.2/performance-benchmarking-with-tpcc-large.md @@ -87,26 +87,15 @@ CockroachDB requires TCP communication on two ports: 1. SSH to the first VM where you want to run a CockroachDB node. -1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ +1. Visit [Releases]({% link releases/index.md %}?filters=linux) to download CockroachDB for Linux. Select the architecture of the VM, either Intel or ARM. Releases are rolled out gradually, so the latest version may not yet be available. - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. +1. 
Extract the binary you downloaded, then optionally copy it into a location in your `PATH`. If you choose to copy it into a system directory, you may need to use `sudo`. -1. Run the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command: +1. Start CockroachDB using the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command: {% include_cached copy-clipboard.html %} ~~~ shell - $ cockroach start \ + cockroach start \ --insecure \ --advertise-addr= \ --join=,, \ @@ -116,7 +105,7 @@ CockroachDB requires TCP communication on two ports: Each node will start with a [locality]({% link {{ page.version.version }}/cockroach-start.md %}#locality) that includes an artificial "rack number" (e.g., `--locality=rack=0`). Use 81 racks for 81 nodes so that 1 node will be assigned to each rack. -1. Repeat steps 1 - 3 for the other 80 VMs for CockroachDB nodes. Each time, be sure to: +1. Repeat these steps for the other 80 VMs for CockroachDB nodes. Each time, be sure to: - Adjust the `--advertise-addr` flag. - Set the [`--locality`]({% link {{ page.version.version }}/cockroach-start.md %}#locality) flag to the appropriate "rack number". @@ -124,7 +113,7 @@ CockroachDB requires TCP communication on two ports: {% include_cached copy-clipboard.html %} ~~~ shell - $ cockroach init --insecure --host=
+ cockroach init --insecure --host=
~~~ ## Step 3. Configure the cluster diff --git a/src/current/v24.2/performance-benchmarking-with-tpcc-medium.md b/src/current/v24.2/performance-benchmarking-with-tpcc-medium.md index a31c09ce23a..73947b42a6d 100644 --- a/src/current/v24.2/performance-benchmarking-with-tpcc-medium.md +++ b/src/current/v24.2/performance-benchmarking-with-tpcc-medium.md @@ -87,26 +87,15 @@ CockroachDB requires TCP communication on two ports: 1. SSH to the first VM where you want to run a CockroachDB node. -1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ +1. Visit [Releases]({% link releases/index.md %}?filters=linux) to download CockroachDB for Linux. Select the architecture of the VM, either Intel or ARM. Releases are rolled out gradually, so the latest version may not yet be available. - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. +1. Extract the binary you downloaded, then optionally copy it into a location in your `PATH`. If you choose to copy it into a system directory, you may need to use `sudo`. 1. Run the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command: {% include_cached copy-clipboard.html %} ~~~ shell - $ cockroach start \ + cockroach start \ --insecure \ --advertise-addr= \ --join=,, \ @@ -124,7 +113,7 @@ CockroachDB requires TCP communication on two ports: {% include_cached copy-clipboard.html %} ~~~ shell - $ cockroach init --insecure --host=
+ cockroach init --insecure --host=
~~~ ## Step 3. Configure the cluster diff --git a/src/current/v24.2/performance-benchmarking-with-tpcc-small.md b/src/current/v24.2/performance-benchmarking-with-tpcc-small.md index 0c8111dd101..e7bfd9e2a1d 100644 --- a/src/current/v24.2/performance-benchmarking-with-tpcc-small.md +++ b/src/current/v24.2/performance-benchmarking-with-tpcc-small.md @@ -76,26 +76,15 @@ CockroachDB requires TCP communication on two ports: 1. SSH to the first VM where you want to run a CockroachDB node. -1. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ +1. Visit [Releases]({% link releases/index.md %}?filters=linux) to download CockroachDB for Linux. Select the architecture of the VM, either Intel or ARM. Releases are rolled out gradually, so the latest version may not yet be available. - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. +1. Extract the binary you downloaded, then optionally copy it into a location in your `PATH`. If you choose to copy it into a system directory, you may need to use `sudo`. 1. Run the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command: {% include_cached copy-clipboard.html %} ~~~ shell - $ cockroach start \ + cockroach start \ --insecure \ --advertise-addr= \ --join=,, \ @@ -108,7 +97,7 @@ CockroachDB requires TCP communication on two ports: {% include_cached copy-clipboard.html %} ~~~ shell - $ cockroach init --insecure --host=
+ cockroach init --insecure --host=
~~~ ## Step 3. Import the TPC-C dataset diff --git a/src/current/v24.2/performance-recipes.md b/src/current/v24.2/performance-recipes.md index 3447b25cf31..c0de8f2feb5 100644 --- a/src/current/v24.2/performance-recipes.md +++ b/src/current/v24.2/performance-recipes.md @@ -25,11 +25,11 @@ This section describes how to use CockroachDB commands and dashboards to identif
[Table diff; original HTML markup not recoverable. The table lists transaction-contention indicators — the **Transactions** page showing transactions with `Waiting` status, degraded application performance with `SQLSTATE: 40001` and a transaction retry error message, `crdb_internal.transaction_contention_events` showing contention, and spikes over time in the **SQL Statement Contention** and **Transaction Restarts** graphs — and the diff updates its CockroachDB {{ site.data.products.cloud }} Console links to `{% link cockroachcloud/... %}` tags.]
@@ -95,19 +95,19 @@ This section provides solutions for common performance issues in your applicatio These are indicators that a transaction is trying to access a row that has been ["locked"]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#writing) by another, concurrent transaction issuing a [write]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#write-intents) or [locking read]({% link {{ page.version.version }}/select-for-update.md %}#lock-strengths). -- The **Active Executions** table on the **Transactions** page ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/transactions-page) or [DB Console]({% link {{ page.version.version }}/ui-transactions-page.md %}#active-executions-table)) shows transactions with `Waiting` in the **Status** column. You can sort the table by **Time Spent Waiting**. +- The **Active Executions** table on the **Transactions** page ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/transactions-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-transactions-page.md %}#active-executions-table)) shows transactions with `Waiting` in the **Status** column. You can sort the table by **Time Spent Waiting**. - Querying the [`crdb_internal.cluster_locks`]({% link {{ page.version.version }}/crdb-internal.md %}#cluster_locks) table shows transactions where [`granted`]({% link {{ page.version.version }}/crdb-internal.md %}#cluster-locks-columns) is `false`. These are indicators that lock contention occurred in the past: - Querying the [`crdb_internal.transaction_contention_events`]({% link {{ page.version.version }}/crdb-internal.md %}#transaction_contention_events) table `WHERE contention_type='LOCK_WAIT'` indicates that your transactions have experienced lock contention. - - This is also shown in the **Transaction Executions** view on the **Insights** page ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/insights-page#transaction-executions-view) and [DB Console]({% link {{ page.version.version }}/ui-insights-page.md %}#transaction-executions-view)). Transaction executions will display the [**High Contention** insight]({% link {{ page.version.version }}/ui-insights-page.md %}#high-contention). + - This is also shown in the **Transaction Executions** view on the **Insights** page ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/insights-page.md %}#transaction-executions-view) and [DB Console]({% link {{ page.version.version }}/ui-insights-page.md %}#transaction-executions-view)). Transaction executions will display the [**High Contention** insight]({% link {{ page.version.version }}/ui-insights-page.md %}#high-contention). {{site.data.alerts.callout_info}} {%- include {{ page.version.version }}/performance/sql-trace-txn-enable-threshold.md -%} {{site.data.alerts.end}} -- The **SQL Statement Contention** graph ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/metrics-page#sql-statement-contention) and [DB Console]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#sql-statement-contention)) is showing spikes over time. 
+- The **SQL Statement Contention** graph ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/metrics-page.md %}#sql-statement-contention) and [DB Console]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#sql-statement-contention)) is showing spikes over time. [figure: SQL Statement Contention graph in DB Console] If a long-running transaction is waiting due to [lock contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention): @@ -124,11 +124,11 @@ These are indicators that a transaction has failed due to [contention]({% link { - A [transaction retry error]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) with `SQLSTATE: 40001`, the string [`restart transaction`]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction), and an error code such as [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`RETRY_SERIALIZABLE`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_serializable), is emitted to the client. These errors are typically seen under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) and not [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. - Querying the [`crdb_internal.transaction_contention_events`]({% link {{ page.version.version }}/crdb-internal.md %}#transaction_contention_events) table `WHERE contention_type='SERIALIZATION_CONFLICT'` indicates that your transactions have experienced serialization conflicts. - - This is also shown in the **Transaction Executions** view on the **Insights** page ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/insights-page#transaction-executions-view) and [DB Console]({% link {{ page.version.version }}/ui-insights-page.md %}#transaction-executions-view)). Transaction executions will display the [**Failed Execution** insight due to a serialization conflict]({% link {{ page.version.version }}/ui-insights-page.md %}#serialization-conflict-due-to-transaction-contention). + - This is also shown in the **Transaction Executions** view on the **Insights** page ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/insights-page.md %}#transaction-executions-view) and [DB Console]({% link {{ page.version.version }}/ui-insights-page.md %}#transaction-executions-view)). Transaction executions will display the [**Failed Execution** insight due to a serialization conflict]({% link {{ page.version.version }}/ui-insights-page.md %}#serialization-conflict-due-to-transaction-contention). These are indicators that transaction retries occurred in the past: -- The **Transaction Restarts** graph ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/metrics-page#transaction-restarts) and [DB Console]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#transaction-restarts) is showing spikes in transaction retries over time. +- The **Transaction Restarts** graph ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/metrics-page.md %}#transaction-restarts) and [DB Console]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#transaction-restarts)) is showing spikes in transaction retries over time. 
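A minimal sketch of running the two `crdb_internal.transaction_contention_events` checks named above from a shell; the connection string is a placeholder, and `SELECT *` is used because only the table name and the `contention_type` filters are given above.

~~~ shell
# Past lock contention:
cockroach sql --url "{connection-string}" \
  -e "SELECT * FROM crdb_internal.transaction_contention_events WHERE contention_type = 'LOCK_WAIT'"

# Past serialization conflicts:
cockroach sql --url "{connection-string}" \
  -e "SELECT * FROM crdb_internal.transaction_contention_events WHERE contention_type = 'SERIALIZATION_CONFLICT'"
~~~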
{% include {{ page.version.version }}/performance/transaction-retry-error-actions.md %} @@ -140,7 +140,7 @@ When running under `SERIALIZABLE` isolation, implement [client-side retry handli ##### Identify conflicting transactions -- In the **Active Executions** table on the **Transactions** page ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/transactions-page) or [DB Console]({% link {{ page.version.version }}/ui-transactions-page.md %}#active-executions-table)), look for a **waiting** transaction (`Waiting` status). +- In the **Active Executions** table on the **Transactions** page ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/transactions-page.md %}) or [DB Console]({% link {{ page.version.version }}/ui-transactions-page.md %}#active-executions-table)), look for a **waiting** transaction (`Waiting` status). {{site.data.alerts.callout_success}} If you see many waiting transactions, a single long-running transaction may be blocking transactions that are, in turn, blocking others. In this case, sort the table by **Time Spent Waiting** to find the transaction that has been waiting for the longest amount of time. Unblocking this transaction may unblock the other transactions. {{site.data.alerts.end}} @@ -160,8 +160,8 @@ When running under `SERIALIZABLE` isolation, implement [client-side retry handli To identify transactions that experienced [lock contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) in the past: -- In the **Transaction Executions** view on the **Insights** page ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/insights-page#transaction-executions-view) and [DB Console]({% link {{ page.version.version }}/ui-insights-page.md %}#transaction-executions-view)), look for a transaction with the **High Contention** insight. Click the transaction's execution ID and view the transaction execution details, including the details of the blocking transaction. -- Visit the **Transactions** page ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/transactions-page) and [DB Console]({% link {{ page.version.version }}/ui-transactions-page.md %})) and sort transactions by **Contention Time**. +- In the **Transaction Executions** view on the **Insights** page ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/insights-page.md %}#transaction-executions-view) and [DB Console]({% link {{ page.version.version }}/ui-insights-page.md %}#transaction-executions-view)), look for a transaction with the **High Contention** insight. Click the transaction's execution ID and view the transaction execution details, including the details of the blocking transaction. +- Visit the **Transactions** page ([CockroachDB {{ site.data.products.cloud }} Console]({% link cockroachcloud/transactions-page.md %}) and [DB Console]({% link {{ page.version.version }}/ui-transactions-page.md %})) and sort transactions by **Contention Time**. 
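To complement the console-based steps above, a sketch of the `crdb_internal.cluster_locks` check mentioned earlier on this page for transactions currently waiting on a lock (`granted` is `false`); the connection string is a placeholder.

~~~ shell
cockroach sql --url "{connection-string}" \
  -e "SELECT * FROM crdb_internal.cluster_locks WHERE granted = false"
~~~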
To view tables and indexes that experienced [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention): diff --git a/src/current/v24.2/performance.md b/src/current/v24.2/performance.md index 4fcad5c64ad..6f1e574cf15 100644 --- a/src/current/v24.2/performance.md +++ b/src/current/v24.2/performance.md @@ -6,7 +6,7 @@ toc_not_nested: true docs_area: reference.benchmarking --- -CockroachDB delivers predictable throughput and latency at all scales on commodity hardware. This page provides an overview of the performance profiles you can expect, based on Cockroach Labs's extensive testing using industry-standard benchmarks like TPC-C and Sysbench. +CockroachDB delivers predictable throughput and latency at all scales on commodity hardware. This page provides an overview of the performance profiles you can expect, based on Cockroach Labs's extensive testing using the TPC-C industry-standard benchmark. For instructions to reproduce the TPC-C results listed here, see [Performance Benchmarking with TPC-C]({% link {{ page.version.version }}/performance-benchmarking-with-tpcc-large.md %}). If you fail to achieve similar results, there is likely a problem in either the hardware, workload, or test design. @@ -18,7 +18,7 @@ This document is about CockroachDB performance on benchmarks. For guidance on tu TPC-C provides the most realistic and objective measure for OLTP performance at various scale factors. During testing, CockroachDB v21.1 processed **1.68M tpmC with 140,000 warehouses, resulting in an efficiency score of 95%.** As shown in the following chart, this was a 40% improvement over the results from CockroachDB 19.2. -For a refresher on what exactly TPC-C is and how it is measured, see [Benchmarks used](#benchmarks-used). +For a refresher on what exactly TPC-C is and how it is measured, see [Benchmark details](#benchmark-details). CockroachDB achieves this performance in [`SERIALIZABLE` isolation]({% link {{ page.version.version }}/demo-serializable.md %}), the strongest isolation level in the SQL standard. @@ -51,25 +51,17 @@ This chart shows that adding nodes increases throughput linearly while holding p ## Throughput -Cockroach Labs believes TPC-C provides the most realistic and objective measure for OLTP throughput. In the real world, applications generate transactional workloads that consist of a combination of reads and writes, possibly with concurrency and likely without all data being loaded into memory. If you see benchmark results quoted in QPS, take them with a grain of salt, because anything as simple as a “query” is unlikely to be representative of the workload you need to run in practice. - -With that in mind, however, you can use [Sysbench](https://github.com/akopytov/sysbench) for straight-forward throughput benchmarking. For example, on a 3-node cluster of AWS `c5d.9xlarge` machines across AWS’s `us-east-1` region (availability zones `a`, `b`, and `c`), CockroachDB can achieve 118,000 inserts per second on the `oltp_insert` workload and 336,000 reads per second on the `oltp_point_select` workload. We used a concurrency of 480 on the `oltp_insert` workload and a concurrency of 216 on the `oltp_point_select` workload to generate these numbers. - -Sysbench Throughput +Cockroach Labs believes TPC-C provides the most realistic and objective measure for OLTP throughput. 
In the real world, applications generate transactional workloads that consist of a combination of reads and writes, possibly with concurrency and likely without all data being loaded into memory. Approach benchmark results quoted in QPS with caution, because anything as simple as a “query” is unlikely to be representative of the workload you need to run in practice. ## Latency CockroachDB returns single-row **reads in 1 ms** and processes single-row **writes in 2 ms** within a single availability zone. As you expand out to multiple availability zones or multiple regions, latency can increase due to distance and the limitation of the speed of light. -For benchmarking latency, again, Cockroach Labs believes TPC-C provides the most realistic and objective measure, since it encompasses the latency distribution, including tail performance. However, you can use [Sysbench](https://github.com/akopytov/sysbench) for straight-forward latency benchmarking. - -For example, when running Sysbench on a 3-node cluster of AWS `c5d.9xlarge` machines across AWS `us-east-1` region (availability zones `a`, `b`, and `c`), CockroachDB can achieve an average of 4.3ms on the `oltp_insert` workload and 0.7ms on the `oltp_point_select` workload. - -Sysbench Latency +For benchmarking latency, again, Cockroach Labs believes TPC-C provides the most realistic and objective measure, since it encompasses the latency distribution, including tail performance. CockroachDB provides a number of important tuning practices for both single-region and multi-region deployments, including [secondary indexes]({% link {{ page.version.version }}/indexes.md %}) and various [data topologies]({% link {{ page.version.version }}/topology-patterns.md %}) to achieve low latency. -## Benchmarks used +## Benchmark details ### TPC-C @@ -87,10 +79,6 @@ TPC-C specifies restrictions on the maximum throughput achievable per warehouse. Because TPC-C is constrained to a maximum amount of throughput per warehouse, we often discuss TPC-C performance as the **maximum number of warehouses for which a database can maintain the maximum throughput per minute.** For a full description of the benchmark, see [TPC BENCHMARK™ C Standard Specification Revision 5.11](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-c_v5.11.0.pdf). -### Sysbench - -[Sysbench](https://github.com/akopytov/sysbench) is a popular tool that allows for basic throughput and latency testing. Cockroach Labs prefers the more complex TPC-C, but Sysbench’s `oltp_insert` and `oltp_point_select` workloads are reasonable alternatives for understanding basic throughput and latency across different databases. - ## Performance limitations CockroachDB has no theoretical limitations to scaling, throughput, latency, or concurrency other than the speed of light. 
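For readers who want to try the benchmark themselves, a minimal sketch using the built-in `cockroach workload` TPC-C generator; the warehouse count, ramp, duration, and connection string are illustrative placeholders and are far smaller than the published 140,000-warehouse configuration — see the benchmarking instructions linked above for the full procedure.

~~~ shell
# Load a small TPC-C dataset, then run it at the spec-limited rate.
cockroach workload init tpcc --warehouses=10 "{connection-string}"
cockroach workload run tpcc --warehouses=10 --ramp=1m --duration=10m "{connection-string}"
~~~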
diff --git a/src/current/v24.2/physical-cluster-replication-overview.md b/src/current/v24.2/physical-cluster-replication-overview.md index f5a06cae94b..b0f97d94904 100644 --- a/src/current/v24.2/physical-cluster-replication-overview.md +++ b/src/current/v24.2/physical-cluster-replication-overview.md @@ -37,7 +37,6 @@ You can use PCR in a disaster recovery plan to: ## Known limitations {% include {{ page.version.version }}/known-limitations/physical-cluster-replication.md %} -- {% include {{ page.version.version }}/known-limitations/fast-cutback-latest-timestamp.md %} - {% include {{ page.version.version }}/known-limitations/pcr-scheduled-changefeeds.md %} - {% include {{ page.version.version }}/known-limitations/cutover-stop-application.md %} @@ -45,7 +44,6 @@ You can use PCR in a disaster recovery plan to: Cockroach Labs supports PCR up to the following scale: -- Cluster size: 30TB - Writes: 10,000 writes per second - Reads: 18,000 reads per second @@ -70,16 +68,12 @@ For more comprehensive guides, refer to: - [Physical Cluster Replication Monitoring]({% link {{ page.version.version }}/physical-cluster-replication-monitoring.md %}): for detail on metrics and observability into a replication stream. - [Cut Over from a Primary Cluster to a Standby Cluster]({% link {{ page.version.version }}/cutover-replication.md %}): for a guide on how to complete a replication stream and cut over to the standby cluster. -### Cluster versions and upgrades - -The standby cluster host will need to be at the same major version as, or one version ahead of, the primary's virtual cluster at the time of [cutover]({% link {{ page.version.version }}/cutover-replication.md %}). - -To [upgrade]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}) a virtualized cluster, you must carefully and manually apply the upgrade. For details, refer to [Upgrades]({% link {{ page.version.version }}/work-with-virtual-clusters.md %}#upgrade-a-cluster) in the [Cluster Virtualization Overview]({% link {{ page.version.version }}/cluster-virtualization-overview.md %}). - -When PCR is enabled, we recommend following this procedure on the standby cluster first, before upgrading the primary cluster. It is preferable to avoid a situation in which the virtual cluster, which is being replicated, is a version higher than what the standby cluster can serve if you were to cut over. - ### Start clusters +{{site.data.alerts.callout_danger}} +Before starting PCR, ensure that the standby cluster is at the same version as, or one version ahead of, the primary cluster. For more details, refer to [Cluster versions and upgrades](#cluster-versions-and-upgrades). +{{site.data.alerts.end}} + To use PCR on clusters, you must [initialize]({% link {{ page.version.version }}/cockroach-start.md %}) the primary and standby CockroachDB clusters with the `--virtualized` and `--virtualized-empty` flags respectively. This enables [cluster virtualization]({% link {{ page.version.version }}/cluster-virtualization-overview.md %}) and prepares each cluster for replication. The active primary cluster that serves application traffic: @@ -144,6 +138,22 @@ Statement | Action [`SHOW VIRTUAL CLUSTER`]({% link {{ page.version.version }}/show-virtual-cluster.md %}) | Show all virtual clusters. [`DROP VIRTUAL CLUSTER`]({% link {{ page.version.version }}/drop-virtual-cluster.md %}) | Remove a virtual cluster. 
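A minimal sketch of the initialization just described, assuming the `--virtualized` and `--virtualized-empty` flags are passed when each cluster is initialized; addresses and certificate paths are placeholders — refer to the PCR setup guides linked above for the full procedure.

~~~ shell
# Primary cluster (serves application traffic): initialize as virtualized.
cockroach init --host={primary-node-address} --certs-dir=certs --virtualized

# Standby cluster (receives the replication stream): initialize empty.
cockroach init --host={standby-node-address} --certs-dir=certs --virtualized-empty
~~~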
+### Cluster versions and upgrades + +{{site.data.alerts.callout_danger}} +The standby cluster must be at the same version as, or one version ahead of, the primary's virtual cluster. +{{site.data.alerts.end}} + +When PCR is enabled, upgrade using the following procedure, which upgrades the standby cluster before the primary cluster. Within each of the primary and standby CockroachDB clusters, the system virtual cluster must be at a cluster version greater than or equal to that of the virtual cluster: + +1. [Upgrade the binaries]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}#step-4-perform-the-rolling-upgrade) on the primary and standby clusters. Replace the binary on each node of the cluster and restart the node. +1. [Finalize]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}#step-6-finish-the-upgrade) the upgrade on the standby's system virtual cluster. +1. [Finalize]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}#step-6-finish-the-upgrade) the upgrade on the primary's system virtual cluster. +1. [Finalize]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}#step-6-finish-the-upgrade) the upgrade on the standby's virtual cluster. +1. [Finalize]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}#step-6-finish-the-upgrade) the upgrade on the primary's virtual cluster. + +The standby cluster must be at the same version as, or one version ahead of, the primary's virtual cluster at the time of [cutover]({% link {{ page.version.version }}/cutover-replication.md %}). + ## Demo video Learn how to use PCR to meet your RTO and RPO requirements with the following demo: diff --git a/src/current/v24.2/plpgsql.md b/src/current/v24.2/plpgsql.md index 0d028e19419..189fa85c0c4 100644 --- a/src/current/v24.2/plpgsql.md +++ b/src/current/v24.2/plpgsql.md @@ -2,7 +2,6 @@ title: PL/pgSQL summary: PL/pgSQL is a procedural language that you can use within user-defined functions and stored procedures. toc: true -key: sql-expressions.html docs_area: reference.sql --- diff --git a/src/current/v24.2/qlik.md b/src/current/v24.2/qlik.md index ba48ae4d92e..f0baaa461a2 100644 --- a/src/current/v24.2/qlik.md +++ b/src/current/v24.2/qlik.md @@ -42,7 +42,7 @@ This page describes the Qlik Replicate functionality at a high level. For detail Complete the following items before using Qlik Replicate: -- Ensure you have a secure, publicly available CockroachDB cluster running the latest **{{ page.version.version }}** [production release](https://www.cockroachlabs.com/docs/releases), and have created a [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) that you can use for your Qlik Replicate target endpoint. +- Ensure you have a secure, publicly available CockroachDB cluster running the latest **{{ page.version.version }}** [production release]({% link releases/index.md %}), and have created a [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) that you can use for your Qlik Replicate target endpoint. - Set the following [session variables]({% link {{ page.version.version }}/set-vars.md %}#supported-variables) using [`ALTER ROLE ... 
SET {session variable}`]({% link {{ page.version.version }}/alter-role.md %}#set-default-session-variable-values-for-a-role): {% include_cached copy-clipboard.html %} @@ -65,7 +65,7 @@ Complete the following items before using Qlik Replicate: - If the output of [`SHOW SCHEDULES`]({% link {{ page.version.version }}/show-schedules.md %}) shows any backup schedules, run [`ALTER BACKUP SCHEDULE {schedule_id} SET WITH revision_history = 'false'`]({% link {{ page.version.version }}/alter-backup-schedule.md %}) for each backup schedule. - If the output of `SHOW SCHEDULES` does not show backup schedules, [contact Support](https://support.cockroachlabs.com) to disable revision history for cluster backups. - Manually create all schema objects in the target CockroachDB cluster. Qlik can create a basic schema, but does not create indexes or constraints such as foreign keys and defaults. - - If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail. + - If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables, or add transformation rules. If you make substantial schema changes, the Qlik Replicate migration may fail. {{site.data.alerts.callout_info}} All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices). @@ -78,7 +78,7 @@ You can use Qlik Replicate to migrate tables from a source database to Cockroach In the Qlik Replicate interface, the source database is configured as a **source endpoint** with the appropriate dialect, and CockroachDB is configured as a PostgreSQL **target endpoint**. For information about where to find the CockroachDB connection parameters, see [Connect to a CockroachDB Cluster]({% link {{ page.version.version }}/connect-to-the-database.md %}). {{site.data.alerts.callout_info}} -To use a CockroachDB {{ site.data.products.serverless }} cluster as the target endpoint, set the **Database name** to `{serverless-hostname}.{database-name}` in the Qlik Replicate dialog. For details on how to find these parameters, see [Connect to a CockroachDB Serverless cluster](https://www.cockroachlabs.com/docs/cockroachcloud/connect-to-a-serverless-cluster?filters=connection-parameters#connect-to-your-cluster). Also set **Secure Socket Layer (SSL) mode** to **require**. +To use a CockroachDB {{ site.data.products.serverless }} cluster as the target endpoint, set the **Database name** to `{serverless-hostname}.{database-name}` in the Qlik Replicate dialog. For details on how to find these parameters, see [Connect to a CockroachDB Serverless cluster]({% link cockroachcloud/connect-to-a-serverless-cluster.md %}?filters=connection-parameters#connect-to-your-cluster). Also set **Secure Socket Layer (SSL) mode** to **require**. {{site.data.alerts.end}} - To perform both an initial load and continuous replication of ongoing changes to the target tables, select **Full Load** and **Apply Changes**. This minimizes downtime for your migration. 
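A sketch of the backup-schedule prerequisite above, run through `cockroach sql`; the connection string and `{schedule_id}` are placeholders, and the statements are the ones the prerequisites name.

~~~ shell
# Check for backup schedules, then disable revision history on each one found:
cockroach sql --url "{connection-string}" -e "SHOW SCHEDULES"
cockroach sql --url "{connection-string}" \
  -e "ALTER BACKUP SCHEDULE {schedule_id} SET WITH revision_history = 'false'"
~~~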
@@ -97,7 +97,7 @@ In the Qlik Replicate interface, CockroachDB is configured as a PostgreSQL **sou ## See also - [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}) -- [Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) +- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) - [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %}) - [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %}) - [Migrate with AWS Database Migration Service (DMS)]({% link {{ page.version.version }}/aws-dms.md %}) diff --git a/src/current/v24.2/query-behavior-troubleshooting.md b/src/current/v24.2/query-behavior-troubleshooting.md index 64733694cd0..0aa4b7bd2a9 100644 --- a/src/current/v24.2/query-behavior-troubleshooting.md +++ b/src/current/v24.2/query-behavior-troubleshooting.md @@ -21,7 +21,7 @@ Such long-running queries can hold locks for (practically) unlimited durations. Refer to the performance tuning recipe for [identifying and unblocking a waiting transaction]({% link {{ page.version.version }}/performance-recipes.md %}#waiting-transaction). -If you experience this issue on a CockroachDB {{ site.data.products.serverless }} cluster, your cluster may be throttled or disabled because you've reached your monthly [resource limits](https://www.cockroachlabs.com/docs/cockroachcloud/troubleshooting-page#hanging-or-stuck-queries). +If you experience this issue on a CockroachDB {{ site.data.products.serverless }} cluster, your cluster may be throttled or disabled because you've reached your monthly [resource limits]({% link cockroachcloud/troubleshooting-page.md %}#hanging-or-stuck-queries). ### Identify slow queries diff --git a/src/current/v24.2/query-data.md b/src/current/v24.2/query-data.md index 6ea76993f2a..1cf8ee25712 100644 --- a/src/current/v24.2/query-data.md +++ b/src/current/v24.2/query-data.md @@ -11,7 +11,7 @@ This page has instructions for making SQL [selection queries][selection] against Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). +- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %}). - [Connect to the database]({% link {{ page.version.version }}/connect-to-the-database.md %}). - [Insert data]({% link {{ page.version.version }}/insert-data.md %}) that you now want to run queries against. diff --git a/src/current/v24.2/read-committed.md b/src/current/v24.2/read-committed.md index f361ab71aa6..c8576c27a64 100644 --- a/src/current/v24.2/read-committed.md +++ b/src/current/v24.2/read-committed.md @@ -154,7 +154,7 @@ Starting a transaction as `READ COMMITTED` does not affect the [default isolatio - [Constraint]({% link {{ page.version.version }}/constraints.md %}) violations will abort transactions at all isolation levels. 
-- In rare cases under `READ COMMITTED` isolation, a [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`ReadWithinUncertaintyIntervalError`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#readwithinuncertaintyintervalerror) error can be returned to the client if a statement has already begun streaming a partial result set back to the client and cannot retry transparently. By default, the result set is buffered up to 16 KiB before overflowing and being streamed to the client. You can configure the result buffer size using the [`sql.defaults.results_buffer.size`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size) cluster setting or the [`results_buffer_size`]({% link {{ page.version.version }}/session-variables.md %}#results-buffer-size) session variable. +- In rare cases under `READ COMMITTED` isolation, a [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`ReadWithinUncertaintyIntervalError`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#readwithinuncertaintyintervalerror) error can be returned to the client if a statement has already begun streaming a partial result set back to the client and cannot retry transparently. By default, the result set is buffered up to 16 KiB before overflowing and being streamed to the client. You can configure the result buffer size using the [`sql.defaults.results_buffer.size`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size) cluster setting. ### Concurrency anomalies @@ -919,4 +919,4 @@ SELECT * FROM schedules - [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %}) - [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/) - [Read Committed RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20230122_read_committed_isolation.md) -- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}) \ No newline at end of file +- [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}) diff --git a/src/current/v24.2/restoring-backups-across-versions.md b/src/current/v24.2/restoring-backups-across-versions.md index 816b406cc76..f17715f3556 100644 --- a/src/current/v24.2/restoring-backups-across-versions.md +++ b/src/current/v24.2/restoring-backups-across-versions.md @@ -29,7 +29,7 @@ Backup taken on version | Restorable into version
22.1.x | 22.1.x, 22.2.x
22.2.x | 22.2.x, 23.1.x
-When a cluster is in a mixed-version state during an upgrade, [full cluster restores]({% link {{ page.version.version }}/restore.md %}#restore-a-cluster) will fail. See the [Upgrade documentation]({% link {{ page.version.version }}/upgrade-cockroach-version.md %}) for the necessary steps to finalize your upgrade. 
For CockroachDB {{ site.data.products.cloud }} clusters, see the [CockroachDB Cloud Upgrade Policy]({% link cockroachcloud/upgrade-policy.md %}) page. {{site.data.alerts.callout_info}} Cockroach Labs does **not** support restoring backups from a higher version into a lower version. diff --git a/src/current/v24.2/revoke.md b/src/current/v24.2/revoke.md index 9fdc995b2fb..d44d6d373b5 100644 --- a/src/current/v24.2/revoke.md +++ b/src/current/v24.2/revoke.md @@ -73,11 +73,11 @@ SHOW GRANTS ON DATABASE movr; ~~~ ~~~ - database_name | grantee | privilege_type -----------------+---------+----------------- - movr | admin | ALL - movr | max | CREATE - movr | root | ALL + database_name | grantee | privilege_type | is_grantable +----------------+---------+----------------+--------------- + movr | admin | ALL | t + movr | max | CREATE | f + movr | root | ALL | t (3 rows) ~~~ @@ -92,10 +92,10 @@ SHOW GRANTS ON DATABASE movr; ~~~ ~~~ - database_name | grantee | privilege_type -----------------+---------+----------------- - movr | admin | ALL - movr | root | ALL + database_name | grantee | privilege_type | is_grantable +----------------+---------+----------------+--------------- + movr | admin | ALL | t + movr | root | ALL | t (2 rows) ~~~ @@ -107,7 +107,7 @@ Any tables that previously inherited the database-level privileges retain the pr {% include_cached copy-clipboard.html %} ~~~ sql -GRANT DELETE ON TABLE rides TO max; +GRANT ALL ON TABLE rides TO max; ~~~ {% include_cached copy-clipboard.html %} @@ -116,17 +116,17 @@ SHOW GRANTS ON TABLE rides; ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type -----------------+-------------+------------+---------+----------------- - movr | public | rides | admin | ALL - movr | public | rides | max | DELETE - movr | public | rides | root | ALL + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | rides | admin | ALL | t + movr | public | rides | max | ALL | f + movr | public | rides | root | ALL | t (3 rows) ~~~ {% include_cached copy-clipboard.html %} ~~~ sql -REVOKE DELETE ON TABLE rides FROM max; +REVOKE ALL ON TABLE rides FROM max; ~~~ {% include_cached copy-clipboard.html %} @@ -135,10 +135,10 @@ SHOW GRANTS ON TABLE rides; ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type -----------------+-------------+------------+---------+----------------- - movr | public | rides | admin | ALL - movr | public | rides | root | ALL + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | rides | admin | ALL | t + movr | public | rides | root | ALL | t (2 rows) ~~~ @@ -146,7 +146,7 @@ SHOW GRANTS ON TABLE rides; {% include_cached copy-clipboard.html %} ~~~ sql -GRANT CREATE, SELECT, DELETE ON TABLE rides, users TO max; +GRANT ALL ON TABLE rides, users TO max; ~~~ {% include_cached copy-clipboard.html %} @@ -158,70 +158,61 @@ SHOW GRANTS ON TABLE movr.*; database_name | schema_name | table_name | grantee | privilege_type | is_grantable ----------------+-------------+----------------------------+---------+----------------+--------------- movr | public | promo_codes | admin | ALL | t - movr | public | promo_codes | demo | ALL | t movr | public | promo_codes | root | ALL | t movr | public | rides | admin | ALL | t - movr | public | rides | demo | ALL | t - 
movr | public | rides | max | CREATE | f - movr | public | rides | max | DELETE | f - movr | public | rides | max | SELECT | f + movr | public | rides | max | ALL | f movr | public | rides | root | ALL | t movr | public | user_promo_codes | admin | ALL | t - movr | public | user_promo_codes | demo | ALL | t movr | public | user_promo_codes | root | ALL | t movr | public | users | admin | ALL | t - movr | public | users | demo | ALL | t - movr | public | users | max | CREATE | f - movr | public | users | max | DELETE | f - movr | public | users | max | SELECT | f + movr | public | users | max | ALL | f movr | public | users | root | ALL | t + movr | public | usertable | admin | ALL | t + movr | public | usertable | root | ALL | t movr | public | vehicle_location_histories | admin | ALL | t - movr | public | vehicle_location_histories | demo | ALL | t movr | public | vehicle_location_histories | root | ALL | t movr | public | vehicles | admin | ALL | t - movr | public | vehicles | demo | ALL | t + movr | public | vehicles | public | SELECT | f movr | public | vehicles | root | ALL | t -(24 rows) +(17 rows) ~~~ {% include_cached copy-clipboard.html %} ~~~ sql -REVOKE DELETE ON ALL TABLES IN SCHEMA public FROM max; +REVOKE ALL ON ALL TABLES IN SCHEMA public FROM max; ~~~ This is equivalent to the following syntax: {% include_cached copy-clipboard.html %} ~~~ sql -REVOKE DELETE ON movr.public.* FROM max; +REVOKE ALL ON movr.public.* FROM max; +~~~ + +{% include_cached copy-clipboard.html %} +~~~ sql +SHOW GRANTS ON TABLE movr.*; ~~~ ~~~ database_name | schema_name | table_name | grantee | privilege_type | is_grantable ----------------+-------------+----------------------------+---------+----------------+--------------- movr | public | promo_codes | admin | ALL | t - movr | public | promo_codes | demo | ALL | t movr | public | promo_codes | root | ALL | t movr | public | rides | admin | ALL | t - movr | public | rides | demo | ALL | t - movr | public | rides | max | CREATE | f - movr | public | rides | max | SELECT | f movr | public | rides | root | ALL | t movr | public | user_promo_codes | admin | ALL | t - movr | public | user_promo_codes | demo | ALL | t movr | public | user_promo_codes | root | ALL | t movr | public | users | admin | ALL | t - movr | public | users | demo | ALL | t - movr | public | users | max | CREATE | f - movr | public | users | max | SELECT | f movr | public | users | root | ALL | t + movr | public | usertable | admin | ALL | t + movr | public | usertable | root | ALL | t movr | public | vehicle_location_histories | admin | ALL | t - movr | public | vehicle_location_histories | demo | ALL | t movr | public | vehicle_location_histories | root | ALL | t movr | public | vehicles | admin | ALL | t - movr | public | vehicles | demo | ALL | t + movr | public | vehicles | public | SELECT | f movr | public | vehicles | root | ALL | t -(22 rows) +(15 rows) ~~~ ### Revoke system-level privileges on the entire cluster @@ -234,7 +225,7 @@ For example, the following statement removes the ability to use the [`SET CLUSTE {% include_cached copy-clipboard.html %} ~~~ sql -REVOKE SYSTEM MODIFYCLUSTERSETTING FROM maxroach; +REVOKE SYSTEM MODIFYCLUSTERSETTING FROM max; ~~~ ### Revoke privileges on schemas @@ -255,11 +246,11 @@ SHOW GRANTS ON SCHEMA cockroach_labs; ~~~ ~~~ - database_name | schema_name | grantee | privilege_type -----------------+----------------+---------+----------------- - movr | cockroach_labs | admin | ALL - movr | cockroach_labs | max | ALL - movr | cockroach_labs | 
root | ALL + database_name | schema_name | grantee | privilege_type | is_grantable +----------------+----------------+---------+----------------+--------------- + movr | cockroach_labs | admin | ALL | t + movr | cockroach_labs | max | ALL | t + movr | cockroach_labs | root | ALL | t (3 rows) ~~~ @@ -274,13 +265,12 @@ SHOW GRANTS ON SCHEMA cockroach_labs; ~~~ ~~~ - database_name | schema_name | grantee | privilege_type -----------------+----------------+---------+----------------- - movr | cockroach_labs | admin | ALL - movr | cockroach_labs | max | GRANT - movr | cockroach_labs | max | USAGE - movr | cockroach_labs | root | ALL -(4 rows) + database_name | schema_name | grantee | privilege_type | is_grantable +----------------+----------------+---------+----------------+--------------- + movr | cockroach_labs | admin | ALL | t + movr | cockroach_labs | max | USAGE | t + movr | cockroach_labs | root | ALL | t +(3 rows) ~~~ ### Revoke privileges on user-defined types @@ -303,32 +293,12 @@ SHOW GRANTS ON TYPE status; ~~~ ~~~ - database_name | schema_name | type_name | grantee | privilege_type -----------------+-------------+-----------+---------+----------------- - movr | public | status | admin | ALL - movr | public | status | max | ALL - movr | public | status | public | USAGE - movr | public | status | root | ALL -(4 rows) -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -REVOKE GRANT ON TYPE status FROM max; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW GRANTS ON TYPE status; -~~~ - -~~~ - database_name | schema_name | type_name | grantee | privilege_type -----------------+-------------+-----------+---------+----------------- - movr | public | status | admin | ALL - movr | public | status | max | USAGE - movr | public | status | public | USAGE - movr | public | status | root | ALL + database_name | schema_name | type_name | grantee | privilege_type | is_grantable +----------------+-------------+-----------+---------+----------------+--------------- + movr | public | status | admin | ALL | t + movr | public | status | max | ALL | f + movr | public | status | public | USAGE | f + movr | public | status | root | ALL | t (4 rows) ~~~ @@ -357,7 +327,7 @@ SHOW GRANTS ON ROLE developer; ~~~ role_name | member | is_admin ------------+--------+----------- - developer | abbey | false + developer | abbey | f (1 row) ~~~ @@ -372,9 +342,7 @@ SHOW GRANTS ON ROLE developer; ~~~ ~~~ - role_name | member | is_admin -------------+--------+----------- -(0 rows) +SHOW GRANTS ON ROLE 0 ~~~ ### Revoke the admin option @@ -392,7 +360,7 @@ SHOW GRANTS ON ROLE developer; ~~~ role_name | member | is_admin ------------+--------+----------- - developer | abbey | true + developer | abbey | t (1 row) ~~~ @@ -409,12 +377,10 @@ SHOW GRANTS ON ROLE developer; ~~~ role_name | member | is_admin ------------+--------+----------- - developer | abbey | false + developer | abbey | f (1 row) ~~~ - - ## See also - [Authorization]({% link {{ page.version.version }}/authorization.md %}) diff --git a/src/current/v24.2/schema-design-database.md b/src/current/v24.2/schema-design-database.md index 7130d3c6d88..1c32e8b7418 100644 --- a/src/current/v24.2/schema-design-database.md +++ b/src/current/v24.2/schema-design-database.md @@ -15,7 +15,7 @@ For reference documentation on the `CREATE DATABASE` statement, including additi Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a 
local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). +- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Review the database schema objects]({% link {{ page.version.version }}/schema-design-overview.md %}). ## Create a database diff --git a/src/current/v24.2/schema-design-indexes.md b/src/current/v24.2/schema-design-indexes.md index 6e895315845..7c2392109ce 100644 --- a/src/current/v24.2/schema-design-indexes.md +++ b/src/current/v24.2/schema-design-indexes.md @@ -18,7 +18,7 @@ This page provides best-practice guidance on creating secondary indexes, with a Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). +- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Review the database schema objects]({% link {{ page.version.version }}/schema-design-overview.md %}). - [Create a database]({% link {{ page.version.version }}/schema-design-database.md %}). - [Create a user-defined schema]({% link {{ page.version.version }}/schema-design-schema.md %}). diff --git a/src/current/v24.2/schema-design-schema.md b/src/current/v24.2/schema-design-schema.md index 402588aea2b..a526c4cf81f 100644 --- a/src/current/v24.2/schema-design-schema.md +++ b/src/current/v24.2/schema-design-schema.md @@ -15,7 +15,7 @@ For detailed reference documentation on the `CREATE SCHEMA` statement, including Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). +- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Review the database schema objects]({% link {{ page.version.version }}/schema-design-overview.md %}). - [Create a database]({% link {{ page.version.version }}/schema-design-database.md %}). diff --git a/src/current/v24.2/schema-design-table.md b/src/current/v24.2/schema-design-table.md index cd18dd9c8e7..e4323a98750 100644 --- a/src/current/v24.2/schema-design-table.md +++ b/src/current/v24.2/schema-design-table.md @@ -15,7 +15,7 @@ For detailed reference documentation on the `CREATE TABLE` statement, including Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). +- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Review the database schema objects]({% link {{ page.version.version }}/schema-design-overview.md %}). - [Create a database]({% link {{ page.version.version }}/schema-design-database.md %}). 
- [Create a user-defined schema]({% link {{ page.version.version }}/schema-design-schema.md %}).
diff --git a/src/current/v24.2/schema-design-update.md b/src/current/v24.2/schema-design-update.md
index de2b23b1241..a52e01ac4ce 100644
--- a/src/current/v24.2/schema-design-update.md
+++ b/src/current/v24.2/schema-design-update.md
@@ -11,7 +11,7 @@ This page provides an overview on changing and removing the objects in a databas
 
 Before reading this page, do the following:
 
-- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local).
+- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local).
 - [Review the database schema objects]({% link {{ page.version.version }}/schema-design-overview.md %}).
 - [Create a database]({% link {{ page.version.version }}/schema-design-database.md %}).
 - [Create a user-defined schema]({% link {{ page.version.version }}/schema-design-schema.md %}).
diff --git a/src/current/v24.2/security-reference/authentication.md b/src/current/v24.2/security-reference/authentication.md
index 0a474e4efdf..3e577851d51 100644
--- a/src/current/v24.2/security-reference/authentication.md
+++ b/src/current/v24.2/security-reference/authentication.md
@@ -9,7 +9,7 @@ This page gives an overview of CockroachDB's security features for authenticating
 
 Instead, you might be looking for:
 
-- [Logging in to the CockroachDB {{ site.data.products.cloud }} web console](https://www.cockroachlabs.com/docs/cockroachcloud/authentication).
+- [Logging in to the CockroachDB {{ site.data.products.cloud }} web console]({% link cockroachcloud/authentication.md %}).
 - [Accessing the DB console on CockroachDB {{ site.data.products.core }} clusters]({% link {{ page.version.version }}/ui-overview.md %}).
 
 ## Authentication configuration
@@ -106,7 +106,7 @@ This is convenient for quick usage and experimentation, but is not suitable for
 
 CockroachDB {{ site.data.products.dedicated }} clusters enforce IP allow-listing, which must be configured through the CockroachDB Cloud Console.
 
-See [Managing Network Authorization for CockroachDB {{ site.data.products.dedicated }}](https://www.cockroachlabs.com/docs/cockroachcloud/network-authorization).
+See [Managing Network Authorization for CockroachDB {{ site.data.products.dedicated }}]({% link cockroachcloud/network-authorization.md %}).
 
 ### CockroachDB Self-Hosted
 
diff --git a/src/current/v24.2/security-reference/authorization.md b/src/current/v24.2/security-reference/authorization.md
index c629db9a2a8..4146b64b602 100644
--- a/src/current/v24.2/security-reference/authorization.md
+++ b/src/current/v24.2/security-reference/authorization.md
@@ -9,7 +9,7 @@ Authorization, generally, is the control over **who** (users/roles) can perform
 
 This page describes authorization of SQL users on particular [CockroachDB database clusters]({% link {{ page.version.version }}/architecture/glossary.md %}#cluster). This is distinct from authorization of CockroachDB {{ site.data.products.cloud }} users on CockroachDB {{ site.data.products.cloud }} organizations.
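+
+For orientation, authorization at this level is managed with SQL statements such as the following (a minimal sketch; the `movr` database and the `max` user are example names used elsewhere in these docs, and are assumed to already exist):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Grant a SQL user the CONNECT privilege on a database,
+-- then inspect the resulting grants.
+GRANT CONNECT ON DATABASE movr TO max;
+SHOW GRANTS ON DATABASE movr FOR max;
+~~~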
-Learn more: [Overview of the CockroachDB {{ site.data.products.cloud }} authorization model](https://www.cockroachlabs.com/docs/cockroachcloud/authorization#overview-of-the-cockroachdb-cloud-two-level-authorization-model)
+Learn more: [Overview of the CockroachDB {{ site.data.products.cloud }} authorization model]({% link cockroachcloud/authorization.md %}#overview-of-the-cockroachdb-cloud-authorization-model)
 
 ## Authorization models
 
diff --git a/src/current/v24.2/security-reference/config-secure-hba.md b/src/current/v24.2/security-reference/config-secure-hba.md
index f23cc1ce897..c8ae501a2d2 100644
--- a/src/current/v24.2/security-reference/config-secure-hba.md
+++ b/src/current/v24.2/security-reference/config-secure-hba.md
@@ -20,7 +20,7 @@ Limiting allowed database connections to secure IP addresses reduces the risk th
 
 ## Step 1: Provision and access your cluster
 
-[Create your own free CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/create-a-serverless-cluster).
+[Create your own free CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/create-a-serverless-cluster.md %}).
 
 From the CockroachDB {{ site.data.products.serverless }} Cloud Console, select your new cluster and click the **Connect** button to obtain your connection credentials from the **Connection Info** pane.
 
diff --git a/src/current/v24.2/security-reference/encryption.md b/src/current/v24.2/security-reference/encryption.md
index 67f4c22d5ef..f988d7e16cd 100644
--- a/src/current/v24.2/security-reference/encryption.md
+++ b/src/current/v24.2/security-reference/encryption.md
@@ -21,7 +21,7 @@ In addition to this infrastructure-level encryption, CockroachDB {{ site.data.pr
 
 Customer-Managed Encryption Keys (CMEK) allow you to protect data at rest in a CockroachDB {{ site.data.products.dedicated }} cluster using a cryptographic key that is entirely within your control, hosted in a supported key-management system (KMS) platform. This key is called the _CMEK key_. The CMEK key is never present in the cluster. Using the KMS platform's identity and access management (IAM) system, you manage CockroachDB's permission to use the key for encryption and decryption. If the key is unavailable, or if CockroachDB no longer has permission to decrypt using the key, the cluster cannot start. To temporarily make the cluster and its data unavailable, such as during a security investigation, you can revoke CockroachDB's access to use the CMEK key or temporarily disable the key within the KMS's infrastructure. To permanently make the cluster's data unavailable, you can delete the CMEK key from the KMS. CockroachDB never has access to the CMEK key materials, and the CMEK key never leaves the KMS.
 
-To learn more, see [Customer-Managed Encryption Keys](https://www.cockroachlabs.com/docs/cockroachcloud/cmek) and [Managing Customer-Managed Encryption Keys (CMEK) for CockroachDB {{ site.data.products.dedicated }}](https://www.cockroachlabs.com/docs/cockroachcloud/managing-cmek).
+To learn more, see [Customer-Managed Encryption Keys]({% link cockroachcloud/cmek.md %}) and [Managing Customer-Managed Encryption Keys (CMEK) for CockroachDB {{ site.data.products.dedicated }}]({% link cockroachcloud/managing-cmek.md %}).
{{site.data.alerts.callout_success}} When CMEK is enabled, the **Encryption** option appears to be disabled in the [DB Console]({% link {{ page.version.version }}/ui-overview.md %}), because this option refers to [Encryption At Rest (Enterprise)](#encryption-at-rest-enterprise), which is a feature of CockroachDB {{ site.data.products.core }} clusters. @@ -146,7 +146,7 @@ Enabling Encryption at Rest might result in a higher CPU utilization. We estimat ## See also -- [Customer-Managed Encryption Keys (CMEK)](https://www.cockroachlabs.com/docs/cockroachcloud/cmek) +- [Customer-Managed Encryption Keys (CMEK)]({% link cockroachcloud/cmek.md %}) - [Client Connection Parameters]({% link {{ page.version.version }}/connection-parameters.md %}) - [Manual Deployment]({% link {{ page.version.version }}/manual-deployment.md %}) - [Orchestrated Deployment]({% link {{ page.version.version }}/kubernetes-overview.md %}) diff --git a/src/current/v24.2/security-reference/security-overview.md b/src/current/v24.2/security-reference/security-overview.md index 9ba3e7b937f..cd92f215cc5 100644 --- a/src/current/v24.2/security-reference/security-overview.md +++ b/src/current/v24.2/security-reference/security-overview.md @@ -169,7 +169,7 @@ CockroachDB {{ site.data.products.core }} here refers to the situation of a user ✓ ✓ ✓ - GCP Private Service Connect (PSC) (Preview) or VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters + GCP Private Service Connect (PSC) (Preview) or VPC Peering for GCP clusters and AWS PrivateLink for AWS clusters Non-Repudiation @@ -189,4 +189,4 @@ CockroachDB {{ site.data.products.core }} here refers to the situation of a user -1: AWS PrivateLink is in preview for multi-region Serverless clusters, and is not supported for single-region Serverless clusters. Refer to Manage AWS PrivateLink. +1: AWS PrivateLink is in preview for multi-region Serverless clusters, and is not supported for single-region Serverless clusters. Refer to Manage AWS PrivateLink. diff --git a/src/current/v24.2/security-reference/transport-layer-security.md b/src/current/v24.2/security-reference/transport-layer-security.md index 97edd5dfbf5..118b0080660 100644 --- a/src/current/v24.2/security-reference/transport-layer-security.md +++ b/src/current/v24.2/security-reference/transport-layer-security.md @@ -22,7 +22,7 @@ This page provides a conceptual overview of Transport Layer Security (TLS) and t **Learn more:** - [Manage PKI certificates for a CockroachDB deployment with HashiCorp Vault]({% link {{ page.version.version }}/manage-certs-vault.md %}) -- [Certificate Authentication for SQL Clients in CockroachDB Dedicated Clusters](https://www.cockroachlabs.com/docs/cockroachcloud/client-certs-dedicated) +- [Certificate Authentication for SQL Clients in CockroachDB Dedicated Clusters]({% link cockroachcloud/client-certs-dedicated.md %}) ## What is Transport Layer Security (TLS)? @@ -164,7 +164,7 @@ PKI for internode communication within CockroachDB {{ site.data.products.dedicat Certificate authentication for SQL clients is available for CockroachDB {{ site.data.products.dedicated }} clusters. -Refer to [Certificate Authentication for SQL Clients in CockroachDB Dedicated Clusters](https://www.cockroachlabs.com/docs/cockroachcloud/client-certs-dedicated) for procedural information on administering and using client certificate authentication. 
+Refer to [Certificate Authentication for SQL Clients in CockroachDB Dedicated Clusters]({% link cockroachcloud/client-certs-dedicated.md %}) for procedural information on administering and using client certificate authentication. ## CockroachDB's TLS support and operating modes diff --git a/src/current/v24.2/show-backup.md b/src/current/v24.2/show-backup.md index c615f432098..e529a0a05f5 100644 --- a/src/current/v24.2/show-backup.md +++ b/src/current/v24.2/show-backup.md @@ -27,7 +27,7 @@ Either the `EXTERNALIOIMPLICITACCESS` [system-level privilege]({% link {{ page.v - Using a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3. - Using the [`cockroach nodelocal upload`]({% link {{ page.version.version }}/cockroach-nodelocal-upload.md %}) command. -No special privilege is required for: +No special privilege is required for: - Interacting with an Amazon S3 and Google Cloud Storage resource using `SPECIFIED` credentials. Azure Storage is always `SPECIFIED` by default. - Using [Userfile]({% link {{ page.version.version }}/use-userfile-storage.md %}) storage. @@ -47,6 +47,7 @@ Parameter | Description `SHOW BACKUPS IN collectionURI` | List the backup paths in the given [collection URI]({% link {{ page.version.version }}/backup.md %}#backup-file-urls). [See the example](#view-a-list-of-the-available-full-backup-subdirectories). `SHOW BACKUP FROM subdirectory IN collectionURI` | Show the details of backups in the subdirectory at the given [collection URI]({% link {{ page.version.version }}/backup.md %}#backup-file-urls). Also, use `FROM LATEST in collectionURI` to show the most recent backup. [See the example](#show-the-most-recent-backup). `SHOW BACKUP SCHEMAS FROM subdirectory IN collectionURI` | Show the schema details of the backup in the given [collection URI]({% link {{ page.version.version }}/backup.md %}#backup-file-urls). [See the example](#show-a-backup-with-schemas). +`collectionURI` | The URI for the [backup storage]({% link {{ page.version.version }}/use-cloud-storage.md %}).
    Note that `SHOW BACKUP` does not support listing backups if the [`nodelocal`]({% link {{ page.version.version }}/cockroach-nodelocal-upload.md %}) storage location is a symlink. Cockroach Labs recommends using remote storage for backups. `kv_option_list` | Control the behavior of `SHOW BACKUP` with a comma-separated list of [these options](#options). ### Options @@ -57,7 +58,7 @@ Option | Value | Description `check_files` | N/A | Validate that all files belonging to a backup are in the expected location in storage. See [Validate a backup's files](#validate-a-backups-files) for an example. `debug_ids` | N/A | [Display descriptor IDs](#show-a-backup-with-descriptor-ids) of every object in the backup, including the object's database and parent schema. `encryption_passphrase` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The passphrase used to [encrypt the files]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}) that the `BACKUP` statement generates (the data files and its manifest, containing the backup's metadata). -`kms` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The URI of the cryptographic key stored in a key management service (KMS), or a comma-separated list of key URIs, used to [take and restore encrypted backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}#examples). Refer to [URI Formats]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}#uri-formats). +`kms` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | The URI of the cryptographic key stored in a key management service (KMS), or a comma-separated list of key URIs, used to [take and restore encrypted backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}#examples). Refer to [URI Formats]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}#uri-formats). `incremental_location` | [`STRING`]({% link {{ page.version.version }}/string.md %}) | [List the details of an incremental backup](#show-a-backup-taken-with-the-incremental-location-option) taken with the [`incremental_location` option]({% link {{ page.version.version }}/backup.md %}#incr-location). `privileges` | N/A | List which users and roles had which privileges on each table in the backup. Displays original ownership of the backup. @@ -90,7 +91,7 @@ See [Show a backup with descriptor IDs](#show-a-backup-with-descriptor-ids) for ### View a list of the available full backup subdirectories -To view a list of the available [full backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups) subdirectories, use the following command: +To view a list of the available [full backups]({% link {{ page.version.version }}/take-full-and-incremental-backups.md %}#full-backups) subdirectories, use the following command: {% include_cached copy-clipboard.html %} ~~~ sql diff --git a/src/current/v24.2/show-create-schedule.md b/src/current/v24.2/show-create-schedule.md index 5d4ad87ae11..7a747968aac 100644 --- a/src/current/v24.2/show-create-schedule.md +++ b/src/current/v24.2/show-create-schedule.md @@ -14,7 +14,7 @@ Only members of the [`admin` role]({% link {{ page.version.version }}/security-r ## Synopsis
    -{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/master/grammar_svg/show_create_schedules.html %} +{% remote_include https://raw.githubusercontent.com/cockroachdb/generated-diagrams/{{ page.release_info.crdb_branch_name }}/grammar_svg/show_create_schedules.html %}
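+
+For example, to see the statements that would recreate the schedules on a cluster (a quick sketch; the numeric ID below is a hypothetical placeholder):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- List the CREATE statements for all schedules on the cluster.
+SHOW CREATE ALL SCHEDULES;
+
+-- Show the CREATE statement for one schedule by its ID (placeholder value).
+SHOW CREATE SCHEDULE 831805087562580993;
+~~~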
    ## Parameters diff --git a/src/current/v24.2/show-grants.md b/src/current/v24.2/show-grants.md index 428ae99ef62..f27469d1d90 100644 --- a/src/current/v24.2/show-grants.md +++ b/src/current/v24.2/show-grants.md @@ -85,14 +85,22 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | relation_name | grantee | privilege_type | is_grantable -----------------+--------------------+-----------------------------------+---------+-----------------+-------------- - movr | crdb_internal | NULL | admin | ALL | true - movr | crdb_internal | NULL | root | ALL | true - movr | crdb_internal | backward_dependencies | public | SELECT | false - movr | crdb_internal | builtin_functions | public | SELECT | false + database_name | schema_name | object_name | object_type | grantee | privilege_type | is_grantable +----------------+--------------------+---------------------------------------------+-------------+---------+----------------+--------------- ... -(365 rows) + movr | public | promo_codes | table | admin | ALL | t + movr | public | promo_codes | table | root | ALL | t + movr | public | rides | table | admin | ALL | t + movr | public | rides | table | root | ALL | t + movr | public | user_promo_codes | table | admin | ALL | t + movr | public | user_promo_codes | table | root | ALL | t + movr | public | users | table | admin | ALL | t + movr | public | users | table | root | ALL | t + movr | public | vehicle_location_histories | table | admin | ALL | t + movr | public | vehicle_location_histories | table | root | ALL | t + movr | public | vehicles | table | admin | ALL | t + movr | public | vehicles | table | root | ALL | t +(609 rows) ~~~ ### Show a specific user or role's grants @@ -113,14 +121,12 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | relation_name | grantee | privilege_type | is_grantable -----------------+--------------------+---------------+---------+-----------------+-------------- - movr | crdb_internal | NULL | max | ALL | true - movr | information_schema | NULL | max | ALL | true - movr | pg_catalog | NULL | max | ALL | true - movr | pg_extension | NULL | max | ALL | true - movr | public | NULL | max | ALL | true -(5 rows) + database_name | schema_name | object_name | object_type | grantee | privilege_type | is_grantable +----------------+-------------+-------------+-------------+---------+----------------+--------------- + movr | NULL | NULL | database | max | ALL | t + movr | public | NULL | schema | public | CREATE | f + movr | public | NULL | schema | public | USAGE | f +(3 rows) ~~~ ### Show grants on databases @@ -133,24 +139,13 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | grantee | privilege_type | is_grantable -----------------+--------------------+---------+-----------------+-------------- - movr | crdb_internal | admin | ALL | true - movr | crdb_internal | max | ALL | false - movr | crdb_internal | root | ALL | true - movr | information_schema | admin | ALL | true - movr | information_schema | max | ALL | false - movr | information_schema | root | ALL | true - movr | pg_catalog | admin | ALL | true - movr | pg_catalog | max | ALL | false - movr | pg_catalog | root | ALL | true - movr | pg_extension | admin | ALL | true - movr | pg_extension | max | ALL | false - movr | pg_extension | root | ALL | true - movr | public | admin | ALL | true - movr | public | 
max | ALL | false - movr | public | root | ALL | true -(15 rows) + database_name | grantee | privilege_type | is_grantable +----------------+---------+----------------+--------------- + movr | admin | ALL | t + movr | max | ALL | t + movr | public | CONNECT | f + movr | root | ALL | t +(4 rows) ~~~ **Specific database, specific user or role:** @@ -161,14 +156,11 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | grantee | privilege_type | is_grantable -----------------+--------------------+---------+-----------------+-------------- - movr | crdb_internal | max | ALL | false - movr | information_schema | max | ALL | false - movr | pg_catalog | max | ALL | false - movr | pg_extension | max | ALL | false - movr | public | max | ALL | false -(5 rows) + database_name | grantee | privilege_type | is_grantable +----------------+---------+----------------+--------------- + movr | max | ALL | t + movr | public | CONNECT | f +(2 rows) ~~~ ### Show grants on tables @@ -186,11 +178,11 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+--------------- - movr | public | users | admin | ALL | true - movr | public | users | max | ALL | true - movr | public | users | root | ALL | true + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | users | admin | ALL | t + movr | public | users | max | ALL | t + movr | public | users | root | ALL | t (3 rows) ~~~ @@ -202,9 +194,9 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+--------------- - movr | public | users | max | ALL | true + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | users | max | ALL | t (1 row) ~~~ @@ -216,21 +208,21 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+----------------------------+---------+-----------------+--------------- - movr | public | promo_codes | admin | ALL | true - movr | public | promo_codes | root | ALL | true - movr | public | rides | admin | ALL | true - movr | public | rides | root | ALL | true - movr | public | user_promo_codes | admin | ALL | true - movr | public | user_promo_codes | root | ALL | true - movr | public | users | admin | ALL | true - movr | public | users | max | ALL | true - movr | public | users | root | ALL | true - movr | public | vehicle_location_histories | admin | ALL | true - movr | public | vehicle_location_histories | root | ALL | true - movr | public | vehicles | admin | ALL | true - movr | public | vehicles | root | ALL | true + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+----------------------------+---------+----------------+--------------- + movr | public | promo_codes | admin | ALL | t + movr | public | promo_codes | root | 
ALL | t + movr | public | rides | admin | ALL | t + movr | public | rides | root | ALL | t + movr | public | user_promo_codes | admin | ALL | t + movr | public | user_promo_codes | root | ALL | t + movr | public | users | admin | ALL | t + movr | public | users | max | ALL | t + movr | public | users | root | ALL | t + movr | public | vehicle_location_histories | admin | ALL | t + movr | public | vehicle_location_histories | root | ALL | t + movr | public | vehicles | admin | ALL | t + movr | public | vehicles | root | ALL | t (13 rows) ~~~ @@ -242,9 +234,9 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | table_name | grantee | privilege_type | is_grantable -----------------+-------------+------------+---------+-----------------+--------------- - movr | public | users | max | ALL | true + database_name | schema_name | table_name | grantee | privilege_type | is_grantable +----------------+-------------+------------+---------+----------------+--------------- + movr | public | users | max | ALL | t (1 row) ~~~ @@ -268,11 +260,11 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | grantee | privilege_type | is_grantable -----------------+----------------+---------+-----------------+--------------- - movr | cockroach_labs | admin | ALL | true - movr | cockroach_labs | max | ALL | true - movr | cockroach_labs | root | ALL | true + database_name | schema_name | grantee | privilege_type | is_grantable +----------------+----------------+---------+----------------+--------------- + movr | cockroach_labs | admin | ALL | t + movr | cockroach_labs | max | ALL | t + movr | cockroach_labs | root | ALL | t (3 rows) ~~~ @@ -284,9 +276,9 @@ To list all grants for all users and roles on the current database and its table ~~~ ~~~ - database_name | schema_name | grantee | privilege_type | is_grantable -----------------+----------------+---------+-----------------+--------------- - movr | cockroach_labs | max | ALL | true + database_name | schema_name | grantee | privilege_type | is_grantable +----------------+----------------+---------+----------------+--------------- + movr | cockroach_labs | max | ALL | t (1 row) ~~~ @@ -312,12 +304,12 @@ To show privileges on [user-defined types]({% link {{ page.version.version }}/cr ~~~ ~~~ - database_name | schema_name | type_name | grantee | privilege_type | is_grantable -----------------+-------------+-----------+---------+-----------------+--------------- - movr | public | status | admin | ALL | true - movr | public | status | max | ALL | true - movr | public | status | public | USAGE | true - movr | public | status | root | ALL | true + database_name | schema_name | type_name | grantee | privilege_type | is_grantable +----------------+-------------+-----------+---------+----------------+--------------- + movr | public | status | admin | ALL | t + movr | public | status | max | ALL | t + movr | public | status | public | USAGE | f + movr | public | status | root | ALL | t (4 rows) ~~~ @@ -329,10 +321,11 @@ To show privileges on [user-defined types]({% link {{ page.version.version }}/cr ~~~ ~~~ - database_name | schema_name | type_name | grantee | privilege_type | is_grantable -----------------+-------------+-----------+---------+-----------------+--------------- - movr | public | status | max | ALL | true -(1 row) + database_name | schema_name | type_name | grantee | privilege_type | is_grantable 
+----------------+-------------+-----------+---------+----------------+--------------- + movr | public | status | max | ALL | t + movr | public | status | public | USAGE | f +(2 rows) ~~~ ### Show grants on user-defined functions @@ -345,10 +338,12 @@ SHOW GRANTS ON FUNCTION num_users; ~~~ ~~~ - database_name | schema_name | function_id | function_signature | grantee | privilege_type | is_grantable -----------------+-------------+-------------+--------------------+---------+----------------+--------------- - movr | public | 100113 | num_users() | root | EXECUTE | t -(1 row) + database_name | schema_name | routine_id | routine_signature | grantee | privilege_type | is_grantable +----------------+-------------+------------+-------------------+---------+----------------+--------------- + movr | public | 100118 | num_users() | admin | ALL | t + movr | public | 100118 | num_users() | public | EXECUTE | f + movr | public | 100118 | num_users() | root | ALL | t +(3 rows) ~~~ ### Show all grants on external connections @@ -410,8 +405,8 @@ SHOW GRANTS ON EXTERNAL CONNECTION my_backup_bucket FOR alice; ~~~ role_name | member | is_admin ------------+--------+----------- - admin | root | true - moderator | max | false + admin | root | t + moderator | max | f (2 rows) ~~~ @@ -425,7 +420,7 @@ SHOW GRANTS ON EXTERNAL CONNECTION my_backup_bucket FOR alice; ~~~ role_name | member | is_admin ------------+--------+----------- - moderator | max | false + moderator | max | f (1 row) ~~~ @@ -439,7 +434,7 @@ SHOW GRANTS ON EXTERNAL CONNECTION my_backup_bucket FOR alice; ~~~ role_name | member | is_admin ------------+--------+----------- - moderator | max | false + moderator | max | f (1 row) ~~~ diff --git a/src/current/v24.2/show-jobs.md b/src/current/v24.2/show-jobs.md index 06f6f156138..df220793598 100644 --- a/src/current/v24.2/show-jobs.md +++ b/src/current/v24.2/show-jobs.md @@ -203,6 +203,8 @@ WITH x AS (SHOW CHANGEFEED JOBS) SELECT * FROM x WHERE status = ('paused'); (1 row) ~~~ +{% include {{ page.version.version }}/cdc/filter-show-changefeed-jobs-columns.md %} + ### Show schema changes You can show just schema change jobs by using `SHOW JOBS` as the data source for a [`SELECT`]({% link {{ page.version.version }}/select-clause.md %}) statement, and then filtering the `job_type` value with the `WHERE` clause: diff --git a/src/current/v24.2/show-types.md b/src/current/v24.2/show-types.md index bb418dc6896..0158e304567 100644 --- a/src/current/v24.2/show-types.md +++ b/src/current/v24.2/show-types.md @@ -7,10 +7,6 @@ docs_area: reference.sql The `SHOW TYPES` statement lists the user-defined [data types]({% link {{ page.version.version }}/data-types.md %}) in the current database. -{{site.data.alerts.callout_info}} -CockroachDB currently only supports [enumerated user-defined types]({% link {{ page.version.version }}/enum.md %}). As a result, [`SHOW ENUMS`]({% link {{ page.version.version }}/show-enums.md %}) and `SHOW TYPES` return the same results. 
-{{site.data.alerts.end}} - ## Syntax ~~~ @@ -41,10 +37,10 @@ The following example creates a [user-defined type]({% link {{ page.version.vers ~~~ ~~~ - schema | name | value ----------+---------+------------------------------------------- - public | weekday | monday|tuesday|wednesday|thursday|friday - public | weekend | sunday|saturday + schema | name | owner +---------+---------+-------- + public | weekday | root + public | weekend | root (2 rows) ~~~ diff --git a/src/current/v24.2/spatial-tutorial.md b/src/current/v24.2/spatial-tutorial.md index 27a3fef2f71..81ccbbc5fc9 100644 --- a/src/current/v24.2/spatial-tutorial.md +++ b/src/current/v24.2/spatial-tutorial.md @@ -42,7 +42,7 @@ For more information about how this data set is put together, see the [Data set ## Step 2. Start CockroachDB -This tutorial can be accomplished in any CockroachDB cluster running [v20.2](https://www.cockroachlabs.com/docs/releases/v20.2#v20-2-0) or later. +This tutorial can be accomplished in any CockroachDB cluster running [v20.2]({% link releases/v20.2.md %}#v20-2-0) or later. The simplest way to get up and running is with [`cockroach demo`]({% link {{ page.version.version }}/cockroach-demo.md %}), which starts a temporary, in-memory CockroachDB cluster and opens an interactive SQL shell: diff --git a/src/current/v24.2/sso-db-console.md b/src/current/v24.2/sso-db-console.md index 996ada9a495..d747c1f5005 100644 --- a/src/current/v24.2/sso-db-console.md +++ b/src/current/v24.2/sso-db-console.md @@ -28,7 +28,7 @@ This SSO implementation uses the [authorization code grant type](https://tools.i - **CockroachDB cluster**: you must have access to one of the following: - A {{ site.data.products.core }} cluster enabled with a valid [CockroachDB Enterprise license]({% link {{ page.version.version }}/enterprise-licensing.md %}). - - A [CockroachDB {{ site.data.products.dedicated }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/create-your-cluster). + - A [CockroachDB {{ site.data.products.dedicated }} cluster]({% link cockroachcloud/create-your-cluster.md %}). ## Log in to a cluster's DB Console with SSO diff --git a/src/current/v24.2/sso-sql.md b/src/current/v24.2/sso-sql.md index aaf460bc01e..23223d196ea 100644 --- a/src/current/v24.2/sso-sql.md +++ b/src/current/v24.2/sso-sql.md @@ -12,7 +12,7 @@ Cluster single sign-on (SSO) enables users to access the SQL interface of a Cock {{ site.data.products.dedicated }} clusters can provision their users with JWTs via the DB Console. This allows users to authenticate to a cluster by signing in to their IdP (for example, Okta or Google) with a link embedded in the DB Console. This flow provisions a JWT that a user can copy out of the DB Console UI and use in a SQL connection string to authenticate to the cluster. {{site.data.alerts.callout_info}} -Cluster single sign-on for the DB Console is supported on {{ site.data.products.core }} {{ site.data.products.enterprise }} and {{ site.data.products.dedicated }} clusters. {{ site.data.products.serverless }} clusters do not support cluster single sign-on, because they do not have access to the DB Console. However, {{ site.data.products.serverless }} clusters can use [Cluster Single Sign-on (SSO) using `ccloud` and the CockroachDB Cloud Console](https://www.cockroachlabs.com/docs/cockroachcloud/cloud-sso-sql). +Cluster single sign-on for the DB Console is supported on {{ site.data.products.core }} {{ site.data.products.enterprise }} and {{ site.data.products.dedicated }} clusters. 
{{ site.data.products.serverless }} clusters do not support cluster single sign-on, because they do not have access to the DB Console. However, {{ site.data.products.serverless }} clusters can use [Cluster Single Sign-on (SSO) using `ccloud` and the CockroachDB Cloud Console]({% link cockroachcloud/cloud-sso-sql.md %}).
 {{site.data.alerts.end}}
 
 This page describes how to configure a cluster for cluster single sign-on using JWTs, and how users can then authenticate using those JWTs. If you're a user ready to sign in to the DB Console with JWTs, you can skip the configuration section:
@@ -119,7 +119,7 @@ You can also view all of your cluster settings in the DB Console.
 
   {{ site.data.products.db }} {{ site.data.products.dedicated }} customers:
 
-  By default, your cluster's configuration will contain the CockroachDB {{ site.data.products.cloud }}'s own public key, allowing CockroachDB {{ site.data.products.cloud }} to serve as an IdP. This is required for [SSO with `ccloud`](https://www.cockroachlabs.com/docs/cockroachcloud/cloud-sso-sql). When modifying this cluster setting, you must include the CockroachDB {{ site.data.products.cloud }} public key in the key set, or SSO with `ccloud` will no longer work.
+  By default, your cluster's configuration will contain CockroachDB {{ site.data.products.cloud }}'s own public key, allowing CockroachDB {{ site.data.products.cloud }} to serve as an IdP. This is required for [SSO with `ccloud`]({% link cockroachcloud/cloud-sso-sql.md %}). When modifying this cluster setting, you must include the CockroachDB {{ site.data.products.cloud }} public key in the key set, or SSO with `ccloud` will no longer work.
 
   The public key for {{ site.data.products.db }} can be found at `https://cockroachlabs.cloud/.well-known/openid-configuration`.
 
@@ -229,7 +229,7 @@ Examples:
 
 - `https://accounts.google.com 1232316645658094244789 roach`
 
-  Maps a single external identity with the hard-coded ID to the [SQL user](https://www.cockroachlabs.com/docs/cockroachcloud/managing-access#manage-sql-users-on-a-cluster) `roach`.
+  Maps a single external identity with the hard-coded ID to the [SQL user]({% link cockroachcloud/managing-access.md %}#manage-sql-users-on-a-cluster) `roach`.
 
 - `https://accounts.google.com /^([0-9]*)$ gcp_\1`
 
diff --git a/src/current/v24.2/start-a-local-cluster.md b/src/current/v24.2/start-a-local-cluster.md
index 97dbfb46a23..55d59491f21 100644
--- a/src/current/v24.2/start-a-local-cluster.md
+++ b/src/current/v24.2/start-a-local-cluster.md
@@ -26,7 +26,7 @@ The store directory is `cockroach-data/` in the same directory as the `cockroach
 
 ## Step 1. Start the cluster
 
-This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link v23.1/deploy-cockroachdb-on-premises.md %}?filters=systemd).
+This section shows how to start a cluster interactively. In production, operators usually use a process manager like `systemd` to start and manage the `cockroach` process on each node. Refer to [Deploy CockroachDB On-Premises]({% link {{page.version.version}}/deploy-cockroachdb-on-premises.md %}?filters=systemd).
 
 1. Use the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command to start `node1` in the foreground:
diff --git a/src/current/v24.2/stored-procedures.md b/src/current/v24.2/stored-procedures.md
index 58ab75a9995..753a650e783 100644
--- a/src/current/v24.2/stored-procedures.md
+++ b/src/current/v24.2/stored-procedures.md
@@ -2,7 +2,6 @@ title: Stored Procedures
 summary: A stored procedure consists of PL/pgSQL or SQL statements that can be issued with a single call.
 toc: true
-key: sql-expressions.html
 docs_area: reference.sql
 ---
 
diff --git a/src/current/v24.2/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md b/src/current/v24.2/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md
index cf316e4d542..3b2ba5f8148 100644
--- a/src/current/v24.2/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md
+++ b/src/current/v24.2/stream-a-changefeed-to-a-confluent-cloud-kafka-cluster.md
@@ -21,7 +21,7 @@ An overview of the workflow involves creating and connecting the following:
 
 You will need the following set up before starting this tutorial:
 
-- A CockroachDB cluster. You can use a CockroachDB {{ site.data.products.cloud }} or CockroachDB {{ site.data.products.core }} cluster. If you are using CockroachDB {{ site.data.products.serverless }} or CockroachDB {{ site.data.products.dedicated }}, see the [Quickstart with CockroachDB](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) guide. For CockroachDB {{ site.data.products.core }} clusters, see the [install]({% link {{ page.version.version }}/install-cockroachdb-mac.md %}) page.
+- A CockroachDB cluster. You can use a CockroachDB {{ site.data.products.cloud }} or CockroachDB {{ site.data.products.core }} cluster. If you are using CockroachDB {{ site.data.products.serverless }} or CockroachDB {{ site.data.products.dedicated }}, see the [Quickstart with CockroachDB]({% link cockroachcloud/quickstart.md %}) guide. For CockroachDB {{ site.data.products.core }} clusters, see the [install]({% link {{ page.version.version }}/install-cockroachdb-mac.md %}) page.
 - A Confluent Cloud account. See Confluent's [Get started](https://www.confluent.io/get-started/) page for details.
 - The Confluent CLI. See [Install Confluent CLI](https://docs.confluent.io/confluent-cli/current/install.html) to set this up. This tutorial uses v3.3.0 of the Confluent CLI. Note that you can also complete the steps in this tutorial in Confluent's Cloud console.
 - {% include {{ page.version.version }}/cdc/tutorial-privilege-check.md %}
diff --git a/src/current/v24.2/striim.md b/src/current/v24.2/striim.md
index ffbba6d55fa..436d7440cbd 100644
--- a/src/current/v24.2/striim.md
+++ b/src/current/v24.2/striim.md
@@ -31,10 +31,10 @@ This page describes the Striim functionality at a high level. For detailed infor
 
 Complete the following items before using Striim:
 
-- Ensure you have a secure, publicly available CockroachDB cluster running the latest **{{ page.version.version }}** [production release](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}), and have created a [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) that you can use to configure your Striim target.
+- Ensure you have a secure, publicly available CockroachDB cluster running the latest **{{ page.version.version }}** [production release]({% link releases/{{ page.version.version }}.md %}), and have created a [SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#sql-users) that you can use to configure your Striim target. - Manually create all schema objects in the target CockroachDB cluster. Although Striim offers a feature called Auto Schema Conversion, we recommend converting and importing your schema before running Striim to ensure that the data populates successfully. - - If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) to convert and export your schema. Ensure that any schema changes are also reflected on your tables. + - If you are migrating from PostgreSQL, MySQL, Oracle, or Microsoft SQL Server, [use the **Schema Conversion Tool**]({% link cockroachcloud/migrations-page.md %}) to convert and export your schema. Ensure that any schema changes are also reflected on your tables. {{site.data.alerts.callout_info}} All tables must have an explicitly defined primary key. For more guidance, see the [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}#schema-design-best-practices). @@ -111,6 +111,6 @@ To perform continuous replication of ongoing changes, create a Striim applicatio ## See also - [Migration Overview]({% link {{ page.version.version }}/migration-overview.md %}) -- [Schema Conversion Tool](https://www.cockroachlabs.com/docs/cockroachcloud/migrations-page) +- [Schema Conversion Tool]({% link cockroachcloud/migrations-page.md %}) - [Change Data Capture Overview]({% link {{ page.version.version }}/change-data-capture-overview.md %}) - [Third-Party Tools Supported by Cockroach Labs]({% link {{ page.version.version }}/third-party-database-tools.md %}) diff --git a/src/current/v24.2/support-resources.md b/src/current/v24.2/support-resources.md index 2001e4a3151..d4ac13b822f 100644 --- a/src/current/v24.2/support-resources.md +++ b/src/current/v24.2/support-resources.md @@ -5,7 +5,7 @@ toc: false docs_area: manage --- -For each major release of CockroachDB, Cockroach Labs provides maintenance support for at least 365 days and assistance support for at least an additional 180 days. For more details, see the [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy). +For each major release of CockroachDB, Cockroach Labs provides maintenance support for at least 365 days and assistance support for at least an additional 180 days. For more details, see the [Release Support Policy]({% link releases/release-support-policy.md %}). If you're having an issue with CockroachDB, you can reach out for support from Cockroach Labs and our community: @@ -13,6 +13,6 @@ If you're having an issue with CockroachDB, you can reach out for support from C - [CockroachDB Community Forum](https://forum.cockroachlabs.com) - [CockroachDB Community Slack](https://cockroachdb.slack.com) - [CockroachDB in StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [File an issue in GitHub](https://www.cockroachlabs.com/docs/{{ page.version.version }}/file-an-issue) +- [File an issue in GitHub]({% link {{ page.version.version }}/file-an-issue.md %}) We also rely on contributions from users like you. 
If you know how to help users who might be struggling with a problem, we hope you will! diff --git a/src/current/v24.2/take-and-restore-locality-aware-backups.md b/src/current/v24.2/take-and-restore-locality-aware-backups.md index 149ae4adfb0..7044d524d60 100644 --- a/src/current/v24.2/take-and-restore-locality-aware-backups.md +++ b/src/current/v24.2/take-and-restore-locality-aware-backups.md @@ -25,7 +25,7 @@ For a technical overview of how a locality-aware backup works, refer to [Job coo ## Supported products -Locality-aware backups are available in **CockroachDB {{ site.data.products.dedicated }}**, **CockroachDB {{ site.data.products.serverless }}**, and **CockroachDB {{ site.data.products.core }}** clusters when you are running [customer-owned backups](https://www.cockroachlabs.com/docs/cockroachcloud/take-and-restore-customer-owned-backups). For a full list of features on CockroachDB Cloud, refer to [Backup and Restore Overview](https://www.cockroachlabs.com/docs/cockroachcloud/backup-and-restore-overview). +Locality-aware backups are available in **CockroachDB {{ site.data.products.dedicated }}**, **CockroachDB {{ site.data.products.serverless }}**, and **CockroachDB {{ site.data.products.core }}** clusters when you are running [customer-owned backups]({% link cockroachcloud/take-and-restore-customer-owned-backups.md %}). For a full list of features on CockroachDB Cloud, refer to [Backup and Restore Overview]({% link cockroachcloud/backup-and-restore-overview.md %}). {{site.data.alerts.callout_info}} {% include {{ page.version.version }}/backups/serverless-locality-aware.md %} diff --git a/src/current/v24.2/take-full-and-incremental-backups.md b/src/current/v24.2/take-full-and-incremental-backups.md index 907086d756f..563e9d73085 100644 --- a/src/current/v24.2/take-full-and-incremental-backups.md +++ b/src/current/v24.2/take-full-and-incremental-backups.md @@ -249,7 +249,7 @@ A full backup must be present in the `{collectionURI}` in order to take an incre For details on the backup directory structure when taking incremental backups with `incremental_location`, see this [incremental location directory structure](#incremental-location-structure) example. -To take incremental backups that are [stored in the same way as v21.2](https://www.cockroachlabs.com/docs/v21.2/take-full-and-incremental-backups#backup-collections) and earlier, you can use the `incremental_location` option. You can specify the same `collectionURI` with `incremental_location` and the backup will place the incremental backups in a date-based path under the full backup, rather than in the default `/incrementals` directory: +To take incremental backups that are [stored in the same way as v21.2]({% link v21.2/take-full-and-incremental-backups.md %}#backup-collections) and earlier, you can use the `incremental_location` option. 
You can specify the same `collectionURI` with `incremental_location` and the backup will place the incremental backups in a date-based path under the full backup, rather than in the default `/incrementals` directory: ~~~ sql BACKUP INTO LATEST IN '{collectionURI}' AS OF SYSTEM TIME '-10s' WITH incremental_location = '{collectionURI}'; diff --git a/src/current/v24.2/third-party-monitoring-tools.md b/src/current/v24.2/third-party-monitoring-tools.md index 4f3e96d8f17..3357c50ba9b 100644 --- a/src/current/v24.2/third-party-monitoring-tools.md +++ b/src/current/v24.2/third-party-monitoring-tools.md @@ -9,8 +9,8 @@ CockroachDB is officially integrated with the following third-party monitoring p | CockroachDB Deployment | Platform | Integration | Metrics from | Tutorial | | -------- | -----------------------|------------ | ------------- | -------- | -| CockroachDB {{ site.data.products.dedicated }} | AWS CloudWatch | [AWS CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) | [Prometheus endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#prometheus-endpoint) | [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster](https://www.cockroachlabs.com/docs/cockroachcloud/export-metrics?filters=aws-metrics-export) | -| CockroachDB {{ site.data.products.dedicated }} | Datadog | [CockroachDB {{ site.data.products.dedicated }} integration for Datadog](https://docs.datadoghq.com/integrations/cockroachdb_dedicated/) | [Prometheus endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#prometheus-endpoint) | [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster](https://www.cockroachlabs.com/docs/cockroachcloud/export-metrics?filters=datadog-metrics-export) | +| CockroachDB {{ site.data.products.dedicated }} | AWS CloudWatch | [AWS CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) | [Prometheus endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#prometheus-endpoint) | [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/export-metrics.md %}?filters=aws-metrics-export) | +| CockroachDB {{ site.data.products.dedicated }} | Datadog | [CockroachDB {{ site.data.products.dedicated }} integration for Datadog](https://docs.datadoghq.com/integrations/cockroachdb_dedicated/) | [Prometheus endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#prometheus-endpoint) | [Export Metrics From a CockroachDB {{ site.data.products.dedicated }} Cluster]({% link cockroachcloud/export-metrics.md %}?filters=datadog-metrics-export) | | CockroachDB {{ site.data.products.core }} | Datadog | [CockroachDB check for Datadog Agent](https://docs.datadoghq.com/integrations/cockroachdb/?tab=host) | [Prometheus endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#prometheus-endpoint) | [Monitor CockroachDB {{ site.data.products.core }} with Datadog]({% link {{ page.version.version }}/datadog.md %}) | | CockroachDB {{ site.data.products.core }} | DBmarlin | [DBmarlin](https://docs.dbmarlin.com/docs/Monitored-Technologies/Databases/cockroachdb) | [`crdb_internal`]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#crdb_internal-system-catalog) | [Monitor CockroachDB {{ site.data.products.core }} with DBmarlin]({% link {{ page.version.version }}/dbmarlin.md %}) | | CockroachDB {{ site.data.products.core }} | Kibana | 
[CockroachDB module for Metricbeat](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-cockroachdb.html) | [Prometheus endpoint]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#prometheus-endpoint) | [Monitor CockroachDB {{ site.data.products.core }} with Kibana](kibana.html) | @@ -21,4 +21,4 @@ CockroachDB is officially integrated with the following third-party monitoring p - [DB Console Overview]({% link {{ page.version.version }}/ui-overview.md %}) - [Logging Overview]({% link {{ page.version.version }}/logging-overview.md %}) - [Metrics]({% link {{ page.version.version }}/metrics.md %}) -- [Differences in Metrics between Third-Party Monitoring Integrations and DB Console]({% link {{ page.version.version }}/differences-in-metrics-between-third-party-monitoring-integrations-and-db-console.md %}) \ No newline at end of file +- [Differences in Metrics between Third-Party Monitoring Integrations and DB Console]({% link {{ page.version.version }}/differences-in-metrics-between-third-party-monitoring-integrations-and-db-console.md %}) diff --git a/src/current/v24.2/transaction-retry-error-reference.md b/src/current/v24.2/transaction-retry-error-reference.md index f7abfe1925e..d50f31bf9f9 100644 --- a/src/current/v24.2/transaction-retry-error-reference.md +++ b/src/current/v24.2/transaction-retry-error-reference.md @@ -135,7 +135,7 @@ The error message for `RETRY_SERIALIZABLE` contains additional information about ``` restart transaction: TransactionRetryWithProtoRefreshError: TransactionRetryError: retry txn (RETRY_SERIALIZABLE - failed preemptive refresh due to conflicting locks on /Table/106/1/918951292305080321/0 [reason=wait_policy] - conflicting txn: meta={id=1b2bf263 key=/Table/106/1/918951292305080321/0 iso=Serializable pri=0.00065863 epo=0 ts=1700512205.521833000,2 min=1700512148.761403000,0 seq=1}): "sql txn" meta={id=07d42834 key=/Table/106/1/918951292305211393/0 iso=Serializable pri=0.01253025 epo=0 ts=1700512229.378453000,2 min=1700512130.342117000,0 seq=2} lock=true stat=PENDING rts=1700512130.342117000,0 wto=false gul=1700512130.842117000,0 SQLSTATE: 40001 -HINT: See: https://www.cockroachlabs.com/docs/v23.2/transaction-retry-error-reference.html#retry_serializable +HINT: See: https://www.cockroachlabs.com/docs/{{ page.version.version }}/transaction-retry-error-reference.html#retry_serializable ``` **Error type:** Serialization error diff --git a/src/current/v24.2/troubleshooting-overview.md b/src/current/v24.2/troubleshooting-overview.md index 2573f1c9fe3..9b21e2725f0 100644 --- a/src/current/v24.2/troubleshooting-overview.md +++ b/src/current/v24.2/troubleshooting-overview.md @@ -16,7 +16,7 @@ If you experience an issue when using CockroachDB, try these steps to resolve th - [Troubleshoot Common Problems]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}) helps you handle errors and troubleshooting problems that may arise during application development. - [Troubleshoot Statement Behavior]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}) helps you with unexpected query results. -- If you are using Cockroach Cloud, see the errors and solutions in [Troubleshoot CockroachDB Cloud](https://www.cockroachlabs.com/docs/cockroachcloud/troubleshooting-page). +- If you are using Cockroach Cloud, see the errors and solutions in [Troubleshoot CockroachDB Cloud]({% link cockroachcloud/troubleshooting-page.md %}). 
- If you see discrepancies in metrics, refer to [Differences in Metrics between Third-Party Monitoring Integrations and DB Console]({% link {{ page.version.version }}/differences-in-metrics-between-third-party-monitoring-integrations-and-db-console.md %}). diff --git a/src/current/v24.2/ui-overload-dashboard.md b/src/current/v24.2/ui-overload-dashboard.md index 7b3b89f8c67..7f809ac46b4 100644 --- a/src/current/v24.2/ui-overload-dashboard.md +++ b/src/current/v24.2/ui-overload-dashboard.md @@ -5,7 +5,13 @@ toc: true docs_area: reference.db_console --- -The **Overload dashboard** lets you monitor the performance of the parts of your cluster relevant to the cluster's [admission control system]({% link {{ page.version.version }}/admission-control.md %}). This includes CPU usage, the runnable goroutines waiting per CPU, the health of the persistent stores, and the performance of admission control system when it is enabled. +The **Overload dashboard** lets you monitor the performance of the parts of your cluster relevant to the cluster's [admission control system]({% link {{ page.version.version }}/admission-control.md %}). This includes CPU usage, the runnable goroutines waiting per CPU, the health of the persistent stores, and the performance of the admission control system when it is enabled. + +The charts allow you to monitor: + +- Metrics that help determine which resource is constrained, such as IO and CPU. +- Metrics that narrow down which admission control queues have requests waiting. +- More advanced metrics about system health, such as the goroutine scheduler and L0 sublevels. To view this dashboard, [access the DB Console]({% link {{ page.version.version }}/ui-overview.md %}#db-console-access), click **Metrics** in the left-hand navigation, and select **Dashboard** > **Overload**. @@ -15,93 +21,113 @@ To view this dashboard, [access the DB Console]({% link {{ page.version.version The **Overload** dashboard displays the following time series graphs: -## CPU percent +## CPU Utilization {% include {{ page.version.version }}/ui/cpu-percent-graph.md %} -## Goroutine Scheduling Latency: 99th percentile +## KV Admission CPU Slots Exhausted Duration Per Second -This graph shows the 99th [percentile](https://wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of scheduling latency for [Goroutines](https://golangbot.com/goroutines/) as tracked by the `cr.node.go.scheduler_latency-p99` metric. +This graph shows the relative time the node had exhausted slots for foreground (regular) CPU work per second of wall time, measured in microseconds/second, as tracked by the `admission.granter.slots_exhausted_duration.kv` metric. An increase in this duration indicates CPU resource exhaustion. -- In the node view, the graph shows the 99th percentile of scheduling latency for Goroutines on the selected node. +KV admission slots are an internal aspect of the [admission control system]({% link {{ page.version.version }}/admission-control.md %}), and are dynamically adjusted to allow for high CPU utilization, but without causing CPU overload. If the used slots are often equal to the available slots, then the admission control system is queueing work to prevent overload. A shortage of KV slots will cause queueing not only at the [KV layer]({% link {{ page.version.version }}/architecture/transaction-layer.md %}), but also at the [SQL layer]({% link {{ page.version.version }}/architecture/sql-layer.md %}), since both layers can be significant consumers of CPU.
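+
+Outside the DB Console, one way to spot-check this signal is to read the underlying counter through the `crdb_internal.node_metrics` virtual table. This is a hedged sketch, not part of the dashboard itself: it assumes a SQL connection to the node you want to inspect, and the cumulative counter is only meaningful when it grows between samples.
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Cumulative duration this node has spent with KV CPU slots exhausted.
+SELECT name, value
+FROM crdb_internal.node_metrics
+WHERE name LIKE 'admission.granter.slots_exhausted_duration%';
+~~~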
-- In the cluster view, the graph shows the 99th percentile of scheduling latency for Goroutines across all nodes in the cluster. +- In the node view, the graph shows the admission slots exhausted duration in microseconds/second on the selected node. +- In the cluster view, the graph shows the admission slots exhausted duration in microseconds/second across all nodes in the cluster. -## Runnable Goroutines per CPU +## Admission IO Tokens Exhausted Duration Per Second -{% include {{ page.version.version }}/ui/runnable-goroutines-graph.md %} +This graph shows the relative time the node had exhausted IO tokens for all IO-bound work per second of wall time, measured in microseconds/second, as tracked by the `admission.granter.io_tokens_exhausted_duration.kv` and `admission.granter.elastic_io_tokens_exhausted_duration.kv` metrics. There are separate lines for regular IO exhausted duration and elastic IO exhausted duration. An increase in this duration indicates IO resource exhaustion. + +This graph indicates write IO overload, which affects KV write operations to [storage]({% link {{ page.version.version }}/architecture/storage-layer.md %}). The [admission control system]({% link {{ page.version.version }}/admission-control.md %}) dynamically calculates write tokens (similar to a [token bucket](https://wikipedia.org/wiki/Token_bucket)) to allow for high write throughput without severely overloading each store. This graph displays the microseconds per second that there were no write tokens left for arriving write requests. When there are no write tokens, these write requests are queued. + +- In the node view, the graph shows the regular (foreground) IO exhausted duration and the elastic (background) IO exhausted duration in microseconds per second on the selected node. +- In the cluster view, the graph shows the regular (foreground) IO exhausted duration and the elastic (background) IO exhausted duration in microseconds per second across all nodes in the cluster. ## IO Overload -This graph shows the health of the [persistent stores]({% link {{ page.version.version }}/architecture/storage-layer.md %}), which are implemented as log-structured merge (LSM) trees. Level 0 is the highest level of the LSM tree and consists of files containing the latest data written to the [Pebble storage engine]({% link {{ page.version.version }}/cockroach-start.md %}#storage-engine). For more information about LSM levels and how LSMs work, see [Log-structured Merge-trees]({% link {{ page.version.version }}/architecture/storage-layer.md %}#log-structured-merge-trees). +This graph shows a derived score based on [admission control's]({% link {{ page.version.version }}/admission-control.md %}) view of the store, as tracked by the `admission.io.overload` metric. Admission control attempts to maintain a score of 0.5. + +This graph indicates the health of the [persistent stores]({% link {{ page.version.version }}/architecture/storage-layer.md %}), which are implemented as log-structured merge (LSM) trees. Level 0 is the highest level of the LSM tree and consists of files containing the latest data written to the [Pebble storage engine]({% link {{ page.version.version }}/cockroach-start.md %}#storage-engine). For more information about LSM levels and how LSMs work, see [Log-structured Merge-trees]({% link {{ page.version.version }}/architecture/storage-layer.md %}#log-structured-merge-trees).
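+
+As a similar hedged sketch (again assuming a SQL connection to the node in question), you can read the current score for each of the node's stores directly; sustained values at or above the 0.5 target suggest the store is falling behind on compactions out of Level 0.
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Current IO overload score, reported per store on this node.
+SELECT store_id, value
+FROM crdb_internal.node_metrics
+WHERE name = 'admission.io.overload';
+~~~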
-This graph specifically shows the number of sub-levels and files in Level 0 normalized by admission thresholds, as tracked by the `cr.store.admission.io.overload` metric. The 1-normalized float indicates whether IO admission control considers the store as overloaded with respect to compaction out of Level 0 (considers sub-level and file counts). +This graph specifically shows the number of sublevels and files in Level 0 normalized by admission thresholds. The 1-normalized float indicates whether IO admission control considers the store overloaded with respect to compaction out of Level 0, based on sublevel and file counts. -- In the node view, the graph shows the health of the persistent store on the selected node. -- In the cluster view, the graph shows the health of the persistent stores across all nodes in the cluster. +- In the node view, the graph shows the IO overload score on the selected node. +- In the cluster view, the graph shows the IO overload score across all nodes in the cluster. {% include {{ page.version.version }}/prod-deployment/healthy-lsm.md %} -## KV Admission Slots Exhausted +## Elastic CPU Tokens Exhausted Duration Per Second -This graph shows the total duration when KV admission slots were exhausted, in microseconds, as tracked by the `cr.node.admission.granter.slots_exhausted_duration.kv` metric. +This graph shows the relative time the node had exhausted tokens for background (elastic) CPU work per second of wall time, measured in microseconds/second, as tracked by the `admission.elastic_cpu.nanos_exhausted_duration` metric. An increase in this duration indicates CPU resource exhaustion, specifically for background (elastic) work. -KV admission slots are an internal aspect of the admission control system, and are dynamically adjusted to allow for high CPU utilization, but without causing CPU overload. If the used slots are often equal to the available slots, then the admission control system is queueing work in order to prevent overload. A shortage of KV slots will cause queuing not only at the KV layer, but also at the SQL layer, since both layers can be significant consumers of CPU. +- In the node view, the graph shows the elastic CPU exhausted duration in microseconds per second on the selected node. +- In the cluster view, the graph shows the elastic CPU exhausted duration in microseconds per second across all nodes in the cluster. -- In the node view, the graph shows the total duration when KV slots were exhausted on the selected node. -- In the cluster view, the graph shows the total duration when KV slots were exhausted across all nodes in the cluster. +## Admission Queueing Delay p99 – Foreground (Regular) CPU -## KV Admission IO Tokens Exhausted Duration Per Second +This graph shows the 99th percentile latency of requests waiting in the various [admission control]({% link {{ page.version.version }}/admission-control.md %}) CPU queues, as tracked by the `admission.wait_durations.kv-p99`, `admission.wait_durations.sql-kv-response-p99`, and `admission.wait_durations.sql-sql-response-p99` metrics. There are separate lines for KV, SQL-KV response, and SQL-SQL response. -This graph indicates write I/O overload, which affects KV write operations to storage. The admission control system dynamically calculates write tokens (similar to a [token bucket](https://wikipedia.org/wiki/Token_bucket)) to allow for high write throughput without severely overloading each store.
This graph displays the microseconds per second that there were no write tokens left for arriving write requests. When there are no write tokens, these write requests are queued. +- In the node view, the graph shows the delay duration for KV, SQL-KV response, and SQL-SQL response on the selected node. +- In the cluster view, the graph shows the delay duration for KV, SQL-KV response, and SQL-SQL response across all nodes in the cluster. -- In the node view, the graph shows the number of microseconds per second that there were no write tokens on the selected node. -- In the cluster view, the graph shows the number of microseconds per second that there were no write tokens across all nodes in the cluster. +## Admission Queueing Delay p99 – Store -## Flow Tokens Wait Time: 75th percentile +This graph shows the 99th percentile latency of requests waiting in the admission control store queue, as tracked by the `admission.wait_durations.kv-stores-p99` and the `admission.wait_durations.elastic-stores-p99` metrics. There are separate lines for KV write and elastic (background) write. -This graph shows the 75th [percentile](https://wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of latency for regular requests and elastic requests spent waiting for flow tokens, as tracked respectively by the `cr.node.kvadmission.flow_controller.regular_wait_duration-p75` and the `cr.node.kvadmission.flow_controller.elastic_wait_duration-p75` metrics. There are separate lines for regular flow token wait time and elastic flow token wait time. +- In the node view, the graph shows the delay duration of KV write and elastic (background) write on the selected node. +- In the cluster view, the graph shows the delay duration of KV write and elastic (background) write across all nodes in the cluster. -- In the node view, the graph shows the 75th percentile of latency for regular requests and elastic requests spent waiting for flow tokens on the selected node. -- In the cluster view, the graph shows the 75th percentile of latency for regular requests and elastic requests spent waiting for flow tokens across all nodes in the cluster. +## Admission Queueing Delay p99 – Background (Elastic) CPU -## Requests Waiting For Flow Tokens +This graph shows the 99th percentile latency of requests waiting in the [admission control]({% link {{ page.version.version }}/admission-control.md %}) elastic CPU queue, as tracked by the `admission.wait_durations.elastic-cpu-p99` metric. -This graph shows the number of regular requests and elastic requests waiting for flow tokens, as tracked respectively by the `cr.node.kvadmission.flow_controller.regular_requests_waiting` and the `cr.node.kvadmission.flow_controller.elastic_requests_waiting` metrics. There are separate lines for regular requests waiting and elastic requests waiting. +- In the node view, the graph shows the delay duration of background (elastic) work on the selected node. +- In the cluster view, the graph shows the delay duration of background (elastic) work across all nodes in the cluster. -- In the node view, the graph shows the number of regular requests and elastic requests waiting for flow tokens on the selected node. -- In the cluster view, the graph shows the number of regular requests and elastic requests waiting for flow tokens across all nodes in the cluster.
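+
+The CPU queues described in the preceding sections accumulate wait time only while the corresponding admission control cluster settings are enabled (they are enabled by default). As a quick sketch, you can confirm their current state from any SQL session:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- List the admission control enablement settings and their values.
+SELECT variable, value
+FROM [SHOW CLUSTER SETTINGS]
+WHERE variable LIKE 'admission.%.enabled';
+~~~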
+## Admission Queueing Delay p99 – Replication Admission Control + +This graph shows the 99th percentile latency of requests waiting in the replication [admission control]({% link {{ page.version.version }}/admission-control.md %}) queue, as tracked by the `kvadmission.flow_controller.regular_wait_duration-p99` and the `kvadmission.flow_controller.elastic_wait_duration-p99` metrics. There are separate lines for regular flow token wait time and elastic (background) flow token wait time. These metrics are indicative of store overload on replicas. + +- In the node view, the graph shows the wait duration of regular flow token wait time and elastic flow token wait time on the selected node. +- In the cluster view, the graph shows the wait duration of regular flow token wait time and elastic flow token wait time across all nodes in the cluster. ## Blocked Replication Streams -This graph shows the number of replication streams with no flow tokens available for regular requests and elastic requests, as tracked respectively by the `cr.node.kvadmission.flow_controller.regular_blocked_stream_count` and the `cr.node.kvadmission.flow_controller.elastic_blocked_stream_count` metrics. There are separate lines for blocked regular streams and blocked elastic streams. +This graph shows the blocked replication streams per node in replication [admission control]({% link {{ page.version.version }}/admission-control.md %}), separated by admission priority {regular, elastic}, as tracked by the `kvadmission.flow_controller.regular_blocked_stream_count` and the `kvadmission.flow_controller.elastic_blocked_stream_count` metrics. There are separate lines for blocked regular streams and blocked elastic (background) streams. -- In the node view, the graph shows the number of replication streams with no flow tokens available for regular requests and elastic requests on the selected node. -- In the cluster view, the graph shows the number of replication streams with no flow tokens available for regular requests and elastic requests across all nodes in the cluster. +- In the node view, the graph shows the number of blocked regular streams and blocked elastic streams on the selected node. +- In the cluster view, the graph shows the number of blocked regular streams and blocked elastic streams across all nodes in the cluster. + +## Elastic CPU Utilization + +This graph shows the CPU utilization by elastic (background) work, compared to the limit set for elastic work, as tracked by the `admission.elastic_cpu.utilization` and the `admission.elastic_cpu.utilization_limit` metrics. + +- In the node view, the graph shows elastic CPU utilization and elastic CPU utilization limit as percentages on the selected node. +- In the cluster view, the graph shows elastic CPU utilization and elastic CPU utilization limit as percentages across all nodes in the cluster. -## Admission Work Rate +## Goroutine Scheduling Latency: 99th percentile -This graph shows the rate that operations within the admission control system are processed. There are lines for requests within the KV layer, write requests within the KV layer, responses between the KV and SQL layer, and responses within the SQL layer when receiving DistSQL responses. +This graph shows the 99th [percentile](https://wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of scheduling latency for [Goroutines](https://golangbot.com/goroutines/), as tracked by the `go.scheduler_latency-p99` metric. 
A value above `1ms` here indicates high load that causes background (elastic) CPU work to be throttled. -- In the node view, the graph shows the rate of operations within the work queues on the selected node. -- In the cluster view, the graph shows the rate of operations within the work queues across all nodes in the cluster. +- In the node view, the graph shows the 99th percentile of scheduling latency for Goroutines on the selected node. +- In the cluster view, the graph shows the 99th percentile of scheduling latency for Goroutines across all nodes in the cluster. -## Admission Delay Rate +## Goroutine Scheduling Latency: 99.9th percentile -This graph shows the latency when admitting operations to the work queues within the admission control system. There are lines for requests within the KV layer, write requests within the KV layer, responses between the KV and SQL layer, and responses within the SQL layer when receiving DistSQL responses. +This graph shows the 99.9th [percentile](https://wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of scheduling latency for [Goroutines](https://golangbot.com/goroutines/), as tracked by the `go.scheduler_latency-p99.9` metric. A high value here can be indicative of high tail latency in various queries. -This sums up the delay experienced by operations of each kind, and takes the rate per second. Dividing this rate by the rate observed in the Admission Work Rate graph gives the mean delay experienced per operation. +- In the node view, the graph shows the 99.9th percentile of scheduling latency for Goroutines on the selected node. +- In the cluster view, the graph shows the 99.9th percentile of scheduling latency for Goroutines across all nodes in the cluster. -- In the node view, the graph shows the rate of latency within the work queues on the selected node. -- In the cluster view, the graph shows the rate of latency within the work queues across all nodes in the cluster. +## Runnable Goroutines per CPU -## Admission Delay: 75th percentile +{% include {{ page.version.version }}/ui/runnable-goroutines-graph.md %} -This graph shows the 75th [percentile](https://wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of latency when admitting operations to the work queues within the admission control system. There are lines for requests within the KV layer, write requests within the KV layer, responses between the KV and SQL layer, and responses within the SQL layer when receiving DistSQL responses. +## LSM L0 Sublevels -This 75th percentile is computed over requests that waited in the admission queue. Work that did not wait is not represented on the graph. +This graph shows the number of sublevels in L0 of the LSM, as tracked by the `storage.l0-sublevels` metric. A sustained value above `10` typically indicates that the store is overloaded. For more information about LSM levels and how LSMs work, see [Log-structured Merge-trees]({% link {{ page.version.version }}/architecture/storage-layer.md %}#log-structured-merge-trees). -- In the node view, the graph shows the 75th percentile of latency within the work queues on the selected node. Over the last minute the admission control system admitted 75% of operations within this time. -- In the cluster view, the graph shows the 75th percentile of latency within the work queues across all nodes in the cluster. Over the last minute the admission control system admitted 75% of operations within this time. 
+- In the node view, the graph shows the number of L0 sublevels on the selected node. +- In the cluster view, the graph shows the number of L0 sublevels across all nodes in the cluster. {% include {{ page.version.version }}/ui/ui-summary-events.md %} diff --git a/src/current/v24.2/ui-overview.md b/src/current/v24.2/ui-overview.md index 5d83e9a805c..1930147952c 100644 --- a/src/current/v24.2/ui-overview.md +++ b/src/current/v24.2/ui-overview.md @@ -11,7 +11,7 @@ The DB Console provides details about your cluster and database configuration, a {{site.data.alerts.callout_info}} Authorized CockroachDB {{ site.data.products.dedicated }} cluster users can visit the DB Console at a URL provisioned for the cluster. -Refer to: [Network Authorization for CockroachDB Cloud Clusters—DB Console](https://www.cockroachlabs.com/docs/cockroachcloud/network-authorization#db-console) +Refer to: [Network Authorization for CockroachDB Cloud Clusters—DB Console]({% link cockroachcloud/network-authorization.md %}#db-console) {{site.data.alerts.end}} ## Authentication diff --git a/src/current/v24.2/update-data.md b/src/current/v24.2/update-data.md index 5c0d256dd29..0c5a33effb8 100644 --- a/src/current/v24.2/update-data.md +++ b/src/current/v24.2/update-data.md @@ -15,7 +15,7 @@ This page has instructions for updating existing rows of data in CockroachDB, us Before reading this page, do the following: -- [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) or [start a local cluster](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart?filters=local). +- [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/quickstart.md %}) or [start a local cluster]({% link cockroachcloud/quickstart.md %}?filters=local). - [Install a Driver or ORM Framework]({% link {{ page.version.version }}/install-client-drivers.md %}). - [Connect to the database]({% link {{ page.version.version }}/connect-to-the-database.md %}). - [Create a database schema]({% link {{ page.version.version }}/schema-design-overview.md %}). 
diff --git a/src/current/v24.2/upgrade-cockroach-version.md b/src/current/v24.2/upgrade-cockroach-version.md index b327222239c..8f1f1554b2e 100644 --- a/src/current/v24.2/upgrade-cockroach-version.md +++ b/src/current/v24.2/upgrade-cockroach-version.md @@ -6,6 +6,17 @@ docs_area: manage --- {% assign previous_version = site.data.versions | where_exp: "previous_version", "previous_version.major_version == page.version.version" | map: "previous_version" | first %} +{% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.version.version" | first %} + +{% assign released = false %} +{% assign skippable = false %} +{% if rd.release_date != "N/A" and rd.maint_supp_exp_date != "N/A" %} + {% assign released = true %} + {% if rd.asst_supp_exp_date == "N/A" %} + {% assign skippable = true %} + {% endif %} +{% endif %} + {% assign earliest = site.data.releases | where_exp: "earliest", "earliest.major_version == page.version.version" | sort: "release_date" | first %} {% assign latest = site.data.releases | where_exp: "latest", "latest.major_version == page.version.version" | sort: "release_date" | last %} {% assign prior = site.data.releases | where_exp: "prior", "prior.major_version == page.version.version" | sort: "release_date" | pop | last %} @@ -17,33 +28,32 @@ docs_area: manage Because of CockroachDB's [multi-active availability]({% link {{ page.version.version }}/multi-active-availability.md %}) design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations. -This page describes how to upgrade to the latest **{{ page.version.version }}** release, **{{ latest.release_name }}**{% if latest.lts == true %} ([LTS]({% link releases/release-support-policy.md %}#support-types)){% endif %}. To upgrade CockroachDB on Kubernetes, refer to [single-cluster]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) or [multi-cluster]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}#upgrade-the-cluster) instead. +This page describes how to upgrade to the latest **{{ page.version.version }}** release, **{{ latest.release_name }}**{% if latest.lts == true %} ([LTS]({% link releases/release-support-policy.md %}#support-types)){% endif %}. -## Terminology +{% include latest-release-details.md %} -Before upgrading, review the CockroachDB [release](../releases/) terminology: +To upgrade CockroachDB on Kubernetes, refer to [single-cluster]({% link {{ page.version.version }}/upgrade-cockroachdb-kubernetes.md %}) or [multi-cluster]({% link {{ page.version.version }}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.md %}#upgrade-the-cluster) instead. -- A new *major release* is performed multiple times per year. The major version number indicates the year of release followed by the release number, starting with 1. For example, the latest major release is {{ actual_latest_prod.major_version }}. -- Each [supported](https://www.cockroachlabs.com/docs/releases/release-support-policy) major release is maintained across *patch releases* that contain improvements including performance or security enhancements and bug fixes. Each patch release increments the major version number with its corresponding patch number. For example, patch releases of {{ actual_latest_prod.major_version }} use the format {{ actual_latest_prod.major_version }}.x. 
-- All major and patch releases are suitable for production environments, and are therefore considered "production releases". For example, the latest production release is {{ actual_latest_prod.release_name }}. -- Prior to an upcoming major release, alpha, beta, and release candidate (RC) binaries are made available for users who need early access to a feature before it is available in a production release. These releases append the terms `alpha`, `beta`, or `rc` to the version number. These "testing releases" are not suitable for production environments and are not eligible for support or uptime SLA commitments. For more information, refer to the [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy). +## Step 1. Verify that you can upgrade {{site.data.alerts.callout_info}} -There are no "minor releases" of CockroachDB. +A cluster cannot be upgraded from an alpha binary of a prior release or from a binary built from the `master` branch of the CockroachDB source code. {{site.data.alerts.end}} -## Step 1. Verify that you can upgrade +{% if skippable == true %} +CockroachDB {{ page.version.version }} is an optional [Innovation release]({% link releases/release-support-policy.md %}#innovation-releases). If you skip it, you must upgrade to the next [Regular release]({% link releases/release-support-policy.md %}#regular-releases) when it is available in order to maintain support. +{% else %} +CockroachDB {{ page.version.version }} is a required [Regular release]({% link releases/release-support-policy.md %}#regular-releases). +{% endif %}{% comment %}TODO before next Regular, add logic for multiple targets to upgrade from to get to a Regular release{% endcomment %} -{{site.data.alerts.callout_danger}} -In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary of CockroachDB or a binary that was manually built from the `master` branch cannot subsequently be upgraded to a production release. -{{site.data.alerts.end}} +To verify that you can upgrade: -Run [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}) against any node in the cluster to open the SQL shell. Then check your current cluster version: +1. Run [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}) against any node in the cluster to open the SQL shell. Then check your current cluster version: -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING version; -~~~ + {% include_cached copy-clipboard.html %} + ~~~ sql + > SHOW CLUSTER SETTING version; + ~~~ To upgrade to {{ latest.release_name }}, you must be running{% if prior.release_name %} either{% endif %}:
| -| Pre-{{ previous_version }} production release | Upgrade through each subsequent major release, [ending with a {{ previous_version }} production release](https://www.cockroachlabs.com/docs/{{ previous_version }}/upgrade-cockroach-version). | -| {{ previous_version}} testing release | [Upgrade to a {{ previous_version }} production release](https://www.cockroachlabs.com/docs/{{ previous_version }}/upgrade-cockroach-version). | +| Pre-{{ page.version.version }} testing release | Upgrade to a corresponding production release, then upgrade through each subsequent major release, [ending with a {{ previous_version }} production release]({% link {{ previous_version }}/upgrade-cockroach-version.md %}). | +| Pre-{{ previous_version }} production release | Upgrade through each subsequent major release, [ending with a {{ previous_version }} production release]({% link {{ previous_version }}/upgrade-cockroach-version.md %}). | +| {{ previous_version}} testing release | [Upgrade to a {{ previous_version }} production release]({% link {{ previous_version }}/upgrade-cockroach-version.md %}). | When you are ready to upgrade to {{ latest.release_name }}, continue to [step 2](#step-2-prepare-to-upgrade). @@ -68,9 +78,9 @@ Before starting the upgrade, complete the following steps. ### Review breaking changes -{% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.version.version" | first %} -Review the [backward-incompatible changes](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}), [deprecated features](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}), and [key cluster setting changes](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-cluster-settings{% endunless %}) in {{ page.version.version }}. If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. + +Review the [backward-incompatible changes]({% link releases/{{ page.version.version }}.md %}{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}), [deprecated features]({% link releases/{{ page.version.version }}.md %}#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}), and [key cluster setting changes]({% link releases/{{ page.version.version }}.md %}#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-cluster-settings{% endunless %}) in {{ page.version.version }}. If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. ### Check load balancing @@ -121,7 +131,7 @@ This step is relevant only when upgrading from {{ previous_version }}.x to {{ pa By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain features and performance improvements introduced in {{ page.version.version }}. 
However, it will no longer be possible to [roll back to {{ previous_version }}](#step-5-roll-back-the-upgrade-optional) if auto-finalization is enabled. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the previous binary and then restore from one of the backups created prior to performing the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in [step 6](#step-6-finish-the-upgrade): -1. [Upgrade to {{ previous_version }}](https://www.cockroachlabs.com/docs/{{ previous_version }}/upgrade-cockroach-version), if you haven't already. +1. [Upgrade to {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroach-version.md %}), if you haven't already. 1. Start the [`cockroach sql`]({% link {{ page.version.version }}/cockroach-sql.md %}) shell against any node in the cluster. @@ -136,11 +146,13 @@ By default, after all nodes are running the new version, the upgrade process wil ### Features that require upgrade finalization -When upgrading from {{ previous_version }} to {{ page.version.version }}, certain features and performance improvements will be enabled only after finalizing the upgrade, including but not limited to: +When upgrading from one major version to another, certain features and performance improvements will be enabled only after finalizing the upgrade. However, when upgrading from {{ previous_version }} to {{ page.version.version }}, all features are available immediately, and no features require finalization. -- {% include v24.1/finalization-required/119894.md version="v24.1" %} +{{site.data.alerts.callout_info}} +Finalization is always required to complete an upgrade. +{{site.data.alerts.end}} -For more details about a given feature, refer to the [CockroachDB v24.1.0 release notes](https://www.cockroachlabs.com/docs/releases/v24.1#v24-1-0). +For more details about a given feature, refer to the [CockroachDB v24.2.0 release notes]({% link releases/v24.2.md %}#v24-2-0). ## Step 4. Perform the rolling upgrade @@ -165,7 +177,7 @@ These steps perform an upgrade to the latest {{ page.version.version }} release, 1. [Drain and shut down the node]({% link {{ page.version.version }}/node-shutdown.md %}#perform-node-shutdown). -1. Visit [What's New in {{ page.version.version }}?](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}) and download the **CockroachDB {{ latest.release_name }} full binary** for your architecture. +1. Visit [What's New in {{ page.version.version }}?]({% link releases/{{ page.version.version }}.md %}) and download the **CockroachDB {{ latest.release_name }} full binary** for your architecture. 1. Extract the archive. In the following instructions, replace `{COCKROACHDB_DIR}` with the path to the extracted archive directory. @@ -183,7 +195,7 @@ These steps perform an upgrade to the latest {{ page.version.version }} release, cp -i {COCKROACHDB_DIR}/cockroach /usr/local/bin/cockroach ~~~ -1. If a cluster has corrupt descriptors, a major-version upgrade cannot be finalized. Automatic descriptor repair is enabled by default in {{ page.version.version }}. After restarting each cluster node on {{ page.version.version }}, monitor the [cluster logs](https://www.cockroachlabs.com/docs/{{ page.version.version }}/logging) for errors. 
If a descriptor cannot be repaired automatically, [contact support](https://support.cockroachlabs.com/hc) for assistance completing the upgrade. To disable automatic descriptor repair (not generally recommended), set the environment variable `COCKROACH_RUN_FIRST_UPGRADE_PRECONDITION` to `false`. +1. If a cluster has corrupt descriptors, a major-version upgrade cannot be finalized. Automatic descriptor repair is enabled by default in {{ page.version.version }}. After restarting each cluster node on {{ page.version.version }}, monitor the [cluster logs]({% link {{ page.version.version }}/logging.md %}) for errors. If a descriptor cannot be repaired automatically, [contact support](https://support.cockroachlabs.com/hc) for assistance completing the upgrade. To disable automatic descriptor repair (not generally recommended), set the environment variable `COCKROACH_RUN_FIRST_UPGRADE_PRECONDITION` to `false`. 1. Start the node so that it can rejoin the cluster. @@ -316,11 +328,11 @@ In the event of catastrophic failure or corruption, it may be necessary to [rest ## See also -- [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy) +- [Release Support Policy]({% link releases/release-support-policy.md %}) - [View Node Details]({% link {{ page.version.version }}/cockroach-node.md %}) - [Collect Debug Information]({% link {{ page.version.version }}/cockroach-debug-zip.md %}) - [View Version Details]({% link {{ page.version.version }}/cockroach-version.md %}) -- [Release notes for our latest version](https://www.cockroachlabs.com/docs/releases/{{page.version.version}}) +- [Release notes for our latest version]({% link releases/{{page.version.version}}.md %}) [#102961]: https://github.com/cockroachdb/cockroach/pull/102961 [#104265]: https://github.com/cockroachdb/cockroach/pull/104265 diff --git a/src/current/v24.2/upgrade-cockroachdb-kubernetes.md b/src/current/v24.2/upgrade-cockroachdb-kubernetes.md index 802491d2408..e59a58e6918 100644 --- a/src/current/v24.2/upgrade-cockroachdb-kubernetes.md +++ b/src/current/v24.2/upgrade-cockroachdb-kubernetes.md @@ -32,11 +32,11 @@ If you [deployed CockroachDB on Red Hat OpenShift]({% link {{ page.version.versi 1. Verify that you can upgrade. - To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](https://www.cockroachlabs.com/docs/releases/) and not a testing release (alpha/beta). + To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release]({% link releases/index.md %}) and not a testing release (alpha/beta). Therefore, in order to upgrade to {{ page.version.version }}, you must be on a production release of {{ previous_version }}. - 1. If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}](https://www.cockroachlabs.com/docs/v21.1/operate-cockroachdb-kubernetes#upgrade-the-cluster). Be sure to complete all the steps. + 1. 
If you are upgrading to {{ page.version.version }} from a production release earlier than {{ previous_version }}, or from a testing release (alpha/beta), first [upgrade to a production release of {{ previous_version }}]({% link {{ previous_version }}/upgrade-cockroachdb-kubernetes.md %}). Be sure to complete all the steps. 1. Then return to this page and perform a second upgrade to {{ page.version.version }}. @@ -51,7 +51,7 @@ If you [deployed CockroachDB on Red Hat OpenShift]({% link {{ page.version.versi {% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.version.version" | first %} -1. Review the [backward-incompatible changes in {{ page.version.version }}](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}) and [deprecated features](https://www.cockroachlabs.com/docs/releases/{{ page.version.version }}#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. +1. Review the [backward-incompatible changes in {{ page.version.version }}]({% link releases/{{ page.version.version }}.md %}{% unless rd.release_date == "N/A" or rd.release_date > today %}#{{ page.version.version | replace: ".", "-" }}-0-backward-incompatible-changes{% endunless %}) and [deprecated features]({% link releases/{{ page.version.version }}.md %}#{% unless rd.release_date == "N/A" or rd.release_date > today %}{{ page.version.version | replace: ".", "-" }}-0-deprecations{% endunless %}). If any affect your deployment, make the necessary changes before starting the rolling upgrade to {{ page.version.version }}. 1. Change the desired Docker image in the custom resource: diff --git a/src/current/v24.2/user-defined-functions.md b/src/current/v24.2/user-defined-functions.md index bbec370c7de..fbee3e34b91 100644 --- a/src/current/v24.2/user-defined-functions.md +++ b/src/current/v24.2/user-defined-functions.md @@ -2,7 +2,6 @@ title: User-Defined Functions summary: A user-defined function is a named function defined at the database level that can be called in queries and other contexts. toc: true -key: sql-expressions.html docs_area: reference.sql --- diff --git a/src/current/v24.2/vault-db-secrets-tutorial.md b/src/current/v24.2/vault-db-secrets-tutorial.md index bb846cfd37e..2f47e18de31 100644 --- a/src/current/v24.2/vault-db-secrets-tutorial.md +++ b/src/current/v24.2/vault-db-secrets-tutorial.md @@ -22,7 +22,7 @@ To follow along with this tutorial you will need the following: - The CockroachDB CLI [installed locally]({% link {{ page.version.version }}/install-cockroachdb-mac.md %}). - The Vault CLI [installed locally](https://www.vaultproject.io/downloads). -- Access to a CockroachDB cluster as [`admin` SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#admin-role). This tutorial will use a CockroachDB {{ site.data.products.serverless }} cluster, but you may either [Create a CockroachDB {{ site.data.products.serverless }} cluster](https://www.cockroachlabs.com/docs/cockroachcloud/create-a-serverless-cluster) or [Start a Local Cluster (secure)](start-a-local-cluster.html) in order to follow along. 
In either case you must have the public CA certificate for your cluster, and a username/password combination for the `root` SQL user (or another SQL user with the [`admin` role]({% link {{ page.version.version }}/security-reference/authorization.md %}#admin-role). +- Access to a CockroachDB cluster as [`admin` SQL user]({% link {{ page.version.version }}/security-reference/authorization.md %}#admin-role). This tutorial will use a CockroachDB {{ site.data.products.serverless }} cluster, but you may either [Create a CockroachDB {{ site.data.products.serverless }} cluster]({% link cockroachcloud/create-a-serverless-cluster.md %}) or [Start a Local Cluster (secure)](start-a-local-cluster.html) in order to follow along. In either case you must have the public CA certificate for your cluster, and a username/password combination for the `root` SQL user (or another SQL user with the [`admin` role]({% link {{ page.version.version }}/security-reference/authorization.md %}#admin-role). - Access to a Vault cluster with an admin token. This tutorial will use HashiCorp Cloud Platform, but you may either [spin up a free cluster in HashiCorp Cloud Platform](https://learn.hashicorp.com/collections/vault/cloud) or [start a development cluster locally](https://learn.hashicorp.com/tutorials/vault/getting-started-dev-server). ## Introduction diff --git a/src/current/v24.2/vector.md b/src/current/v24.2/vector.md new file mode 100644 index 00000000000..a07e2c12d1e --- /dev/null +++ b/src/current/v24.2/vector.md @@ -0,0 +1,94 @@ +--- +title: VECTOR +summary: The VECTOR data type stores fixed-length arrays of floating-point numbers, which represent data points in multi-dimensional space. +toc: true +docs_area: reference.sql +--- + +{% include enterprise-feature.md %} + +{{site.data.alerts.callout_info}} +{% include feature-phases/preview.md %} +{{site.data.alerts.end}} + +The `VECTOR` data type stores fixed-length arrays of floating-point numbers, which represent data points in multi-dimensional space. Vector search is often used in AI applications such as Large Language Models (LLMs) that rely on vector representations. + +For details on valid `VECTOR` comparison operators, refer to [Syntax](#syntax). For the list of supported `VECTOR` functions, refer to [Functions and Operators]({% link {{ page.version.version }}/functions-and-operators.md %}#pgvector-functions). + +{{site.data.alerts.callout_info}} +`VECTOR` functionality is compatible with the [`pgvector`](https://github.com/pgvector/pgvector) extension for PostgreSQL. Vector indexing is **not** supported at this time. +{{site.data.alerts.end}} + +## Syntax + +A `VECTOR` value is expressed as an [array]({% link {{ page.version.version }}/array.md %}) of [floating-point numbers]({% link {{ page.version.version }}/float.md %}). The array size corresponds to the number of `VECTOR` dimensions. For example, the following `VECTOR` has 3 dimensions: + +~~~ +[1.0, 0.0, 0.0] +~~~ + +You can specify the dimensions when defining a `VECTOR` column. This will enforce the number of dimensions in the column values. For example: + +~~~ sql +ALTER TABLE foo ADD COLUMN bar VECTOR(3); +~~~ + +The following `VECTOR` comparison operators are valid: + +- `=` (equals). Compare vectors for equality in filtering and conditional queries. +- `<>` (not equal to). Compare vectors for inequality in filtering and conditional queries. +- `<->` (L2 distance). 
Calculate the Euclidean distance between two vectors, as used in [nearest neighbor search](https://en.wikipedia.org/wiki/Nearest_neighbor_search) and clustering algorithms. +- `<#>` (negative inner product). Calculate the [inner product](https://en.wikipedia.org/wiki/Inner_product_space) of two vectors, as used in similarity searches where the inner product can represent the similarity score. +- `<=>` (cosine distance). Calculate the [cosine distance](https://en.wikipedia.org/wiki/Cosine_similarity) between vectors, such as in text and image similarity measures where the orientation of vectors is more important than their magnitude. + +## Size + +The size of a `VECTOR` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification]({% link {{ page.version.version }}/architecture/storage-layer.md %}#write-amplification) and other considerations may cause significant performance degradation. + +## Functions + +For the list of supported `VECTOR` functions, refer to [Functions and Operators]({% link {{ page.version.version }}/functions-and-operators.md %}#pgvector-functions). + +## Example + +Create a table with a `VECTOR` column, specifying `3` dimensions: + +{% include_cached copy-clipboard.html %} +~~~ sql +CREATE TABLE items ( + category STRING, + vector VECTOR(3), + INDEX (category) +); +~~~ + +Insert some sample data into the table: + +{% include_cached copy-clipboard.html %} +~~~ sql +INSERT INTO items (category, vector) VALUES + ('electronics', '[1.0, 0.0, 0.0]'), + ('electronics', '[0.9, 0.1, 0.0]'), + ('furniture', '[0.0, 1.0, 0.0]'), + ('furniture', '[0.0, 0.9, 0.1]'), + ('clothing', '[0.0, 0.0, 1.0]'); +~~~ + +Use the [`<->` operator](#syntax) to sort values in the `electronics` category by their similarity to `[1.0, 0.0, 0.0]`, based on Euclidean distance. + +{% include_cached copy-clipboard.html %} +~~~ sql +SELECT category, vector FROM items WHERE category = 'electronics' ORDER BY vector <-> '[1.0, 0.0, 0.0]' LIMIT 5; +~~~ + +~~~ + category | vector +--------------+-------------- + electronics | [1,0,0] + electronics | [0.9,0.1,0] +~~~ + +## See also + +- [Functions and Operators]({% link {{ page.version.version }}/functions-and-operators.md %}#pgvector-functions) +- [Data Types]({% link {{ page.version.version }}/data-types.md %}) \ No newline at end of file diff --git a/src/current/v24.2/work-with-virtual-clusters.md b/src/current/v24.2/work-with-virtual-clusters.md index 49545dfe221..507c01bd8d4 100644 --- a/src/current/v24.2/work-with-virtual-clusters.md +++ b/src/current/v24.2/work-with-virtual-clusters.md @@ -84,8 +84,8 @@ To connect to the system virtual cluster using the DB Console, add the `GET` URL To [grant]({% link {{ page.version.version }}/grant.md %}) access to the system virtual cluster, you must connect to the system virtual cluster as a user with the `admin` role, then grant either of the following to the SQL user: -- The `admin` [role]({% link v23.2/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster. -- The `VIEWSYSTEMDATA` [system privilege]({% link v23.2/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster.
+- The `admin` [role]({% link {{page.version.version}}/security-reference/authorization.md %}#admin-role) grants the ability to read and modify system tables and cluster settings on any virtual cluster, including the system virtual cluster. +- The `VIEWSYSTEMDATA` [system privilege]({% link {{page.version.version}}/security-reference/authorization.md %}#supported-privileges) grants the ability to read system tables and cluster settings on any virtual cluster, including the system virtual cluster. To prevent unauthorized access, you should limit the users with access to the system virtual cluster.
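+
+As a minimal sketch of the two options above (assuming you are connected to the system virtual cluster as a user with the `admin` role, and that `maxroach` is a placeholder SQL user):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Full ability to read and modify system tables and cluster settings:
+GRANT admin TO maxroach;
+-- Or, read-only access to system tables and cluster settings:
+GRANT SYSTEM VIEWSYSTEMDATA TO maxroach;
+~~~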