From 215e456a3005d5450af974f8c17f2fd3ee6c934a Mon Sep 17 00:00:00 2001 From: Igor Lukanin Date: Tue, 10 Dec 2024 16:11:41 +0100 Subject: [PATCH] docs: Clean up, replace outdated "schema" wording with "data model" --- .../data-modeling/using-dynamic-measures.mdx | 4 ++-- .../apis-integrations/semantic-layer-sync.mdx | 4 ++-- .../getting-started-pre-aggregations.mdx | 2 +- .../caching/matching-pre-aggregations.mdx | 2 +- .../visualization-tools/explo.mdx | 2 +- .../data-modeling/dynamic/javascript.mdx | 24 +++++++++---------- .../dynamic/schema-execution-environment.mdx | 9 +++---- docs/pages/product/deployment.mdx | 4 ++-- docs/pages/product/deployment/core.mdx | 4 ++-- .../product/workspace/rollup-designer.mdx | 2 +- docs/pages/reference/cli.mdx | 2 +- 11 files changed, 30 insertions(+), 29 deletions(-) diff --git a/docs/pages/guides/recipes/data-modeling/using-dynamic-measures.mdx b/docs/pages/guides/recipes/data-modeling/using-dynamic-measures.mdx index 784cbb23db903..fccac6be14325 100644 --- a/docs/pages/guides/recipes/data-modeling/using-dynamic-measures.mdx +++ b/docs/pages/guides/recipes/data-modeling/using-dynamic-measures.mdx @@ -21,8 +21,8 @@ statuses from an external API. To calculate the orders percentage distribution, we need to create several [measures](/product/data-modeling/concepts#measures) that refer to each other. But we don't want to manually change the data model for each new -status. To solve this, we will create a -[schema dynamically](/product/data-modeling/dynamic). +status. To solve this, we will create a [data model +dynamically](/product/data-modeling/dynamic). 
## Data modeling diff --git a/docs/pages/product/apis-integrations/semantic-layer-sync.mdx b/docs/pages/product/apis-integrations/semantic-layer-sync.mdx index af4da3cfd8bd0..a25622423dc96 100644 --- a/docs/pages/product/apis-integrations/semantic-layer-sync.mdx +++ b/docs/pages/product/apis-integrations/semantic-layer-sync.mdx @@ -292,7 +292,7 @@ When the data model is updated, all configured syncs will automatically run. If data model is updated dynamically and the -[`schemaVersion`][ref-config-schemaversion] configuration option is used to +[`schema_version`][ref-config-schemaversion] configuration option is used to track data model changes, syncs will not automatically run. This behavior is disabled by default. Please contact support to enable running syncs when the data model is updated dynamically for your Cube Cloud account. @@ -399,7 +399,7 @@ on, i.e., your development mode branch, shared branch, or main branch. [ref-config-file]: /product/configuration#configuration-options [ref-config-sls]: /reference/configuration/config#semanticlayersync [ref-config-contexts]: /reference/configuration/config#scheduledrefreshcontexts -[ref-config-schemaversion]: /reference/configuration/config#schemaversion +[ref-config-schemaversion]: /reference/configuration/config#schema_version [ref-workspace-sls]: /workspace/bi-integrations [ref-dev-mode]: /product/workspace/dev-mode [ref-auto-sus]: /product/deployment/cloud/auto-suspension diff --git a/docs/pages/product/caching/getting-started-pre-aggregations.mdx b/docs/pages/product/caching/getting-started-pre-aggregations.mdx index 40f991eb82488..c02ed564009b6 100644 --- a/docs/pages/product/caching/getting-started-pre-aggregations.mdx +++ b/docs/pages/product/caching/getting-started-pre-aggregations.mdx @@ -31,7 +31,7 @@ several orders of magnitude, and ensures subsequent queries can be served by the same condensed dataset if any matching attributes are found. 
[Pre-aggregations are defined within each cube's data -schema][ref-schema-preaggs], and cubes can have as many pre-aggregations as they +model][ref-schema-preaggs], and cubes can have as many pre-aggregations as they require. The pre-aggregated data is stored in [Cube Store, a dedicated pre-aggregation storage layer][ref-caching-preaggs-cubestore]. diff --git a/docs/pages/product/caching/matching-pre-aggregations.mdx b/docs/pages/product/caching/matching-pre-aggregations.mdx index 1c2e89e407f49..59bfc15c177b5 100644 --- a/docs/pages/product/caching/matching-pre-aggregations.mdx +++ b/docs/pages/product/caching/matching-pre-aggregations.mdx @@ -47,7 +47,7 @@ checked for additivity. - **Does every member of the query exist in the pre-aggregation?** Cube checks that the pre-aggregation contains all dimensions, filter dimensions, and leaf measures from the query. -- **Are any query measures multiplied in the cube's data schema?** Cube checks +- **Are any query measures multiplied in the cube's data model?** Cube checks if any measures are multiplied via a [`one_to_many` relationship][ref-schema-joins-rel] between cubes in the query. - **Does the query specify granularity for its time dimension?** Cube checks diff --git a/docs/pages/product/configuration/visualization-tools/explo.mdx b/docs/pages/product/configuration/visualization-tools/explo.mdx index 0f4e5c1f8f55f..97f8418641735 100644 --- a/docs/pages/product/configuration/visualization-tools/explo.mdx +++ b/docs/pages/product/configuration/visualization-tools/explo.mdx @@ -52,7 +52,7 @@ some tables: To query your data, go to the Dashboards page and click Create Dashboard. 
Proceed with adding a new dataset -to this dashboard by clicking + (choose the schema you've +to this dashboard by clicking + (choose the data model you've created just a few minutes ago): diff --git a/docs/pages/product/data-modeling/dynamic/javascript.mdx b/docs/pages/product/data-modeling/dynamic/javascript.mdx index 549f475d5e994..0a60df940aa04 100644 --- a/docs/pages/product/data-modeling/dynamic/javascript.mdx +++ b/docs/pages/product/data-modeling/dynamic/javascript.mdx @@ -9,11 +9,11 @@ For similar functionality in YAML, see [Dynamic data models with Jinja and Pytho Cube allows data models to be created on-the-fly using a special -[`asyncModule()`][ref-async-module] function only available in the [schema -execution environment][ref-schema-env]. `asyncModule()` allows registering an +[`asyncModule()`][ref-async-module] function only available in the +[execution environment][ref-schema-env]. `asyncModule()` allows registering an async function to be executed at the end of the data model compile phase so additional definitions can be added. This is often useful in situations where -schema properties can be dynamically updated through an API, for example. +data model properties can be dynamically updated through an API, for example. @@ -91,7 +91,7 @@ generate data models from that data: /product/data-modeling/dynamic/schema-execution-environment#cube-js-globals-cube-and-others ```javascript -// model/cubes/DynamicSchema.js +// model/cubes/DynamicDataModel.js const fetch = require("node-fetch"); import { convertStringPropToFunction, @@ -144,10 +144,10 @@ asyncModule(async () => { }); ``` -## Usage with schemaVersion +## Usage with `schema_version` It is also useful to be able to recompile the data model when there are changes -in the underlying input data. For this purpose, the [`schemaVersion` +in the underlying input data. 
For this purpose, the [`schema_version`
 ][link-config-schema-version] value in the `cube.js` configuration options can
 be specified as an asynchronous function:
 
@@ -156,7 +156,7 @@ be specified as an asynchronous function:
 module.exports = {
   schemaVersion: async ({ securityContext }) => {
     const schemaVersions = await (
-      await fetch("http://your-api-endpoint/schemaVersion")
+      await fetch("http://your-api-endpoint/schema_version")
     ).json();
 
     return schemaVersions[securityContext.tenantId];
@@ -164,18 +164,18 @@ module.exports = {
 };
 ```
 
-[link-config-schema-version]: /reference/configuration/config#schemaversion
+[link-config-schema-version]: /reference/configuration/config#schema_version
 
 ## Usage with COMPILE_CONTEXT
 
 The `COMPILE_CONTEXT` global object can also be used in conjunction with async
-schema creation to allow for multi-tenant deployments of Cube.
+data model creation to allow for multi-tenant deployments of Cube.
 
 In an example scenario where all tenants share the same cube, but see different
 dimensions and measures, you could do the following:
 
 ```javascript
-// model/cubes/DynamicSchema.js
+// model/cubes/DynamicDataModel.js
 const fetch = require("node-fetch");
 import {
   convertStringPropToFunction,
@@ -223,7 +223,7 @@ asyncModule(async () => {
 
 When using multiple databases, you'll need to ensure you set the
 [`data_source`][ref-schema-datasource] property for any asynchronously-created
-schemas, as well as ensuring the corresponding database drivers are set up with
+data models, as well as ensuring the corresponding database drivers are set up with
 [`driverFactory()`][ref-config-driverfactory] in your [`cube.js` configuration
 file][ref-config].
 
@@ -235,7 +235,7 @@ file][ref-config].
 
 For an example scenario where data models may use either MySQL or Postgres
 databases, you could do the following:
 
 ```javascript
-// model/cubes/DynamicSchema.js
+// model/cubes/DynamicDataModel.js
 const fetch = require("node-fetch");
 import {
   convertStringPropToFunction,
diff --git a/docs/pages/product/data-modeling/dynamic/schema-execution-environment.mdx b/docs/pages/product/data-modeling/dynamic/schema-execution-environment.mdx
index e58bab2ad20a4..3202b9fddc080 100644
--- a/docs/pages/product/data-modeling/dynamic/schema-execution-environment.mdx
+++ b/docs/pages/product/data-modeling/dynamic/schema-execution-environment.mdx
@@ -50,8 +50,8 @@ cube(`users`, {
 
 Data models cannot access `console.log` due to a separate
 [VM instance][nodejs-vm] that runs it. Suppose you find yourself writing complex
 logic for SQL generation that depends on a lot of external input. In that case,
-you probably want to introduce a helper service outside of `schema` directory
-that you can debug as usual Node.js code.
+you probably want to introduce a helper service outside of the [data model
+directory][ref-schema-path] that you can debug as regular Node.js code.
 
 ## Cube globals (cube and others)
 
@@ -109,8 +109,8 @@ cube(`users`, {
 ## asyncModule
 
 Data models can be externally stored and retrieved through an asynchronous
-operation using the `asyncModule()`. For more information, consult the [Dynamic
-Schema Creation][ref-dynamic-schemas] page.
+operation using the `asyncModule()`. For more information, consult the [dynamic
+data model creation][ref-dynamic-schemas] page.
 
## Context symbols transpile
 
@@ -191,3 +191,4 @@ cube(`users`, {
 [nodejs-require]: https://nodejs.org/api/modules.html#modules_require_id
 [ref-dynamic-schemas]: /product/data-modeling/dynamic
 [self-require]: #require
+[ref-schema-path]: /reference/configuration/config#schema_path
\ No newline at end of file
diff --git a/docs/pages/product/deployment.mdx b/docs/pages/product/deployment.mdx
index 26f6e89b7610f..404ca89965f3d 100644
--- a/docs/pages/product/deployment.mdx
+++ b/docs/pages/product/deployment.mdx
@@ -57,7 +57,7 @@ The [Cube Docker image][dh-cubejs] is used for API Instance.
 
 API instances can be configured via environment variables or the `cube.js`
 configuration file, and **must** have access to the data model files (as
-specified by [`schemaPath`][ref-conf-ref-schemapath].
+specified by [`schema_path`][ref-conf-ref-schemapath]).
 
 ## Refresh Worker
 
@@ -256,5 +256,5 @@ services:
 [ref-deploy-docker]: /product/deployment/core
 [ref-config-env]: /reference/configuration/environment-variables
 [ref-config-js]: /reference/configuration/config
-[ref-conf-ref-schemapath]: /reference/configuration/config#schemapath
+[ref-conf-ref-schemapath]: /reference/configuration/config#schema_path
 [gh-pavel]: https://github.com/paveltiunov
diff --git a/docs/pages/product/deployment/core.mdx b/docs/pages/product/deployment/core.mdx
index 8ec318944caeb..bbbf374656724 100644
--- a/docs/pages/product/deployment/core.mdx
+++ b/docs/pages/product/deployment/core.mdx
@@ -329,7 +329,7 @@ RUN npm install
 
 And this to the `.dockerignore`:
 
 ```gitignore
-schema
+model
 cube.py
 cube.js
 .env
@@ -371,7 +371,7 @@ installed with `npm install` reside, and result in errors like this:
 
 ```yaml
 # ...
volumes: - - ./schema:/cube/conf/schema + - ./model:/cube/conf/model - ./cube.js:/cube/conf/cube.js # Other necessary files ``` diff --git a/docs/pages/product/workspace/rollup-designer.mdx b/docs/pages/product/workspace/rollup-designer.mdx index d196ba52ee4aa..e59cd43901f94 100644 --- a/docs/pages/product/workspace/rollup-designer.mdx +++ b/docs/pages/product/workspace/rollup-designer.mdx @@ -54,7 +54,7 @@ clicking the ​Settings tab: -Click Add to the Data Schema to add the pre-aggregation to the data +Click Add to the Data Model to add the pre-aggregation to the data model. [ref-caching-preaggs-gs]: /product/caching/getting-started-pre-aggregations diff --git a/docs/pages/reference/cli.mdx b/docs/pages/reference/cli.mdx index 78675bc2e156d..c09346bf0d4b2 100644 --- a/docs/pages/reference/cli.mdx +++ b/docs/pages/reference/cli.mdx @@ -82,7 +82,7 @@ Example of a Cube project with models that fail validation: ```bash{outputLines: 2-16} npx cubejs-cli validate -❌ Cube Schema validation failed +❌ Cube Data Model validation failed Cube Error ---------------------------------------