Merge pull request #122 from Altinity/sphoorti
Updated broken webinar links, added internal links, etc.
DougTidwell authored Feb 5, 2025
2 parents 421e6d2 + 66ef987 commit dff6b79
Showing 5 changed files with 41 additions and 22 deletions.
17 changes: 13 additions & 4 deletions content/en/altinity-kb-schema-design/materialized-views/_index.md
@@ -1,15 +1,24 @@
---
title: "MATERIALIZED VIEWS"
title: "ClickHouse® MATERIALIZED VIEWS"
linkTitle: "MATERIALIZED VIEWS"
description: >
MATERIALIZED VIEWS
Making the most of this powerful ClickHouse® feature
keywords:
- clickhouse materialized view
- create materialized view clickhouse
---

MATERIALIZED VIEWs in ClickHouse® behave like AFTER INSERT TRIGGER to the left-most table listed in their SELECT statement and never read data from disk. Only rows that are placed to the RAM buffer by INSERT are read.
ClickHouse® MATERIALIZED VIEWs behave like an AFTER INSERT TRIGGER on the left-most table listed in their SELECT statement and never read data from disk. Only the rows placed in the RAM buffer by an INSERT are read.
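
For illustration, a minimal sketch of this pattern (the table, view, and column names below are hypothetical):

```sql
-- Hypothetical source table the INSERTs go into
CREATE TABLE events (ts DateTime, user_id UInt64, value Float64)
ENGINE = MergeTree ORDER BY ts;

-- Hypothetical target table the materialized view writes to
CREATE TABLE events_daily (day Date, user_id UInt64, total Float64)
ENGINE = SummingMergeTree ORDER BY (day, user_id);

-- The MV fires on every INSERT into `events` (the left-most table in its SELECT)
-- and transforms only the freshly inserted block of rows.
CREATE MATERIALIZED VIEW events_daily_mv TO events_daily AS
SELECT toDate(ts) AS day, user_id, sum(value) AS total
FROM events
GROUP BY day, user_id;
```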

## Useful links

* ClickHouse and the magic of materialized views. Basics explained with examples: [webinar recording](https://altinity.com/webinarspage/2019/6/26/clickhouse-and-the-magic-of-materialized-views)
* ClickHouse Materialized Views Illuminated, Part 1:
* [Blog post](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-1)
* [Webinar recording](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-1)
* [Slides](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-1)
* ClickHouse Materialized Views Illuminated, Part 2:
* [Blog post](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-2)
* [Webinar recording](https://www.youtube.com/watch?v=THDk625DGsQ)
* Everything you should know about materialized views - [annotated presentation](https://den-crane.github.io/Everything_you_should_know_about_materialized_views_commented.pdf)
* Very detailed information about internals: [video](https://youtu.be/ckChUkC3Pns?t=9353)
* One more [presentation](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup47/materialized_views.pdf)
@@ -1,13 +1,18 @@
---
title: "Replication and DDL queue problems"
title: "ClickHouse® Replication and DDL queue problems"
linkTitle: "Replication and DDL queue problems"
description: >
This article describes how to detect possible problems in the `replication_queue` and `distributed_ddl_queue` and how to troubleshoot.
Finding and troubleshooting problems in the `replication_queue` and `distributed_ddl_queue`
keywords:
- clickhouse replication
- clickhouse ddl
- clickhouse check replication status
- clickhouse replication queue
---

# How to check replication problems:
# How to check ClickHouse® replication problems:

1. check `system.replicas` first, cluster-wide. It allows to check if the problem is local to some replica or global, and allows to see the exception.
1. Check `system.replicas` first, cluster-wide. It shows whether the problem is local to one replica or global, lets you see the exception, and helps answer the following questions:
- Are there any ReadOnly replicas?
- Is there the connection to zookeeper active?
@@ -90,7 +95,7 @@ FORMAT TSVRaw;

Sometimes, due to crashes, a ZooKeeper split-brain problem, or other reasons, some tables can end up in read-only mode. This allows SELECTs but not INSERTs, so we need to run the DROP / RESTORE replica procedure.

Just to be clear, this procedure **will not delete any data**, it will just re-create the metadata in zookeeper with the current state of the ClickHouse replica.
Just to be clear, this procedure **will not delete any data**; it just re-creates the metadata in ZooKeeper from the current state of the [ClickHouse replica](/altinity-kb-setup-and-maintenance/altinity-kb-data-migration/add_remove_replica/).
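
Before restoring, a quick way to spot read-only tables is to query `system.replicas` (a sketch; the exact set of columns varies between ClickHouse versions):

```sql
-- List replicas that are currently in read-only mode
SELECT database, table, is_readonly, zookeeper_exception
FROM system.replicas
WHERE is_readonly;
```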

```sql
DETACH TABLE table_name; -- Required for DROP REPLICA
@@ -171,7 +176,7 @@ restore_replica "$@"

### Stuck DDL tasks in the distributed_ddl_queue

Sometimes DDL tasks (the ones that use ON CLUSTER) can get stuck in the `distributed_ddl_queue` because the replicas can overload if multiple DDLs (thousands of CREATE/DROP/ALTER) are executed at the same time. This is very normal in heavy ETL jobs.This can be detected by checking the `distributed_ddl_queue` table and see if there are tasks that are not moving or are stuck for a long time.
Sometimes [DDL tasks](/altinity-kb-setup-and-maintenance/altinity-kb-ddlworker/) (the ones that use ON CLUSTER) can get stuck in the `distributed_ddl_queue` because replicas can become overloaded when many DDLs (thousands of CREATE/DROP/ALTER) are executed at the same time. This is very normal in heavy ETL jobs. It can be detected by checking the `distributed_ddl_queue` table to see if there are tasks that are not moving or have been stuck for a long time.

If these DDLs completed on some replicas but failed on others, the simplest way to solve this is to execute the failed command on the affected replicas without ON CLUSTER. If most of the DDLs failed, then check the number of unfinished records in `distributed_ddl_queue` on the other nodes, because it will most probably be in the thousands.
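
One way to spot stuck tasks is to group the unfinished entries in `system.distributed_ddl_queue` (a sketch; column names may differ between versions):

```sql
-- Count unfinished ON CLUSTER tasks per host and show how old they are
SELECT host, status, count() AS tasks, min(query_create_time) AS oldest
FROM system.distributed_ddl_queue
WHERE status != 'Finished'
GROUP BY host, status
ORDER BY oldest;
```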

@@ -2,12 +2,16 @@
title: "Add/Remove a new replica to a ClickHouse® cluster"
linkTitle: "add_remove_replica"
description: >
How to add/remove a new replica manually and using clickhouse-backup
How to add/remove a new ClickHouse replica manually and using `clickhouse-backup`
keywords:
- clickhouse replica
- clickhouse add replica
- clickhouse remove replica
---

## ADD nodes/replicas to a ClickHouse® cluster

To add some replicas to an existing cluster if -30TB then better to use replication:
To add ClickHouse® replicas to an existing cluster holding roughly 30TB of data or less, it is usually better to use replication:

- don’t add the `remote_servers.xml` until replication is done.
- Add these files and restart to limit bandwidth and avoid saturation (70% total bandwidth):
@@ -94,7 +98,7 @@ clickhouse-client --host localhost --port 9000 -mn < schema.sql

### Using `clickhouse-backup`

- Using `clickhouse-backup` to copy the schema of a replica to another is also convenient and moreover if using Atomic database with `{uuid}` macros in ReplicatedMergeTree engines:
- Using `clickhouse-backup` to copy the schema from one replica to another is also convenient, especially if [using an Atomic database](/engines/altinity-kb-atomic-database-engine/) with `{uuid}` macros in [ReplicatedMergeTree engines](https://www.youtube.com/watch?v=oHwhXc0re6k):

```bash
sudo -u clickhouse clickhouse-backup --schema --rbac create_remote full-replica
@@ -139,7 +143,7 @@ already exists. (REPLICA_ALREADY_EXISTS) (version 23.5.3.24 (official build)). (
(query: CREATE TABLE IF NOT EXISTS xxxx.yyyy UUID '3c3503c3-ed3c-443b-9cb3-ef41b3aed0a8'
```

The DDLs have been executed and some tables have been created and after that dropped but some left overs are left in ZK:
[The DDLs](/altinity-kb-setup-and-maintenance/altinity-kb-check-replication-ddl-queue/) have been executed, and some tables were created and later dropped, but some leftovers remain in ZooKeeper (see the sketch after this list):
- If databases can be dropped then use `DROP DATABASE xxxxx SYNC`
- If databases cannot be dropped, use `SYSTEM DROP REPLICA 'replica_name' FROM db.table`
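
For illustration, a sketch of both cleanup paths (the database, table, and replica names are hypothetical):

```sql
-- If the whole database can go away:
DROP DATABASE leftover_db SYNC;

-- Otherwise remove only the stale replica's metadata from ZooKeeper.
-- ClickHouse refuses to drop the local replica, so run this for a different (stale) replica name.
SYSTEM DROP REPLICA 'old_replica_name' FROM db.table;
```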

@@ -1,11 +1,12 @@
---
title: "Server config files"
title: "Server configuration files"
linkTitle: "Server config files"
description: >
How to manage server config files in ClickHouse®
How to organize configuration files in ClickHouse® and how to manage changes
keywords:
- clickhouse config.xml
- clickhouse configuration
weight: 105
---

## Config management (recommended structure)
@@ -16,7 +17,7 @@ By default they are stored in the folder **/etc/clickhouse-server/** in two files:

We suggest never changing vendor config files; place your changes into separate .xml files in sub-folders. This is easier to maintain and eases ClickHouse upgrades.

**/etc/clickhouse-server/users.d** – sub-folder for user settings (derived from `users.xml` filename).
**/etc/clickhouse-server/users.d** – sub-folder for [user settings](/altinity-kb-setup-and-maintenance/rbac/) (derived from `users.xml` filename).

**/etc/clickhouse-server/config.d** – sub-folder for server settings (derived from `config.xml` filename).

@@ -84,7 +85,7 @@ cat /etc/clickhouse-server/users.d/memory_usage.xml
</clickhouse>
```

BTW, you can define any macro in your configuration and use them in Zookeeper paths
BTW, you can define any macros in your configuration and use them in [ZooKeeper](https://docs.altinity.com/operationsguide/clickhouse-zookeeper/zookeeper-installation/) paths
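
To verify which macros the server actually loaded, you can query `system.macros` (a minimal sketch):

```sql
-- Shows each macro name and the value substituted into ZooKeeper paths
SELECT macro, substitution FROM system.macros;
```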

```xml
ReplicatedMergeTree('/clickhouse/{cluster}/tables/my_table','{replica}')
@@ -182,7 +183,7 @@ The list of user setting which require server restart:

See also `select * from system.settings where description ilike '%start%'`

Also there are several 'long-running' user sessions which are almost never restarted and can keep the setting from the server start (it's DDLWorker, Kafka, and some other service things).
There are also several 'long-running' user sessions which are almost never restarted and can keep settings from the server start (DDLWorker, [Kafka](https://altinity.com/blog/kafka-engine-the-story-continues), and some other service components).
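
To check which user-level settings the current session actually sees (for example, after editing files under `users.d`), a sketch:

```sql
-- Settings whose values differ from the built-in defaults in this session
SELECT name, value FROM system.settings WHERE changed;
```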

## Dictionaries

@@ -2,10 +2,10 @@
title: "Suspiciously many broken parts"
linkTitle: "Suspiciously many broken parts"
description: >
Suspiciously many broken parts error during the server startup.
Debugging a common error message
keywords:
- clickhouse broken parts
- clickhouse too many parts
- clickhouse too many broken parts
---

## Symptom:
@@ -28,7 +28,7 @@ Why data could be corrupted?

## Action:

1. If you are ok to accept the data loss: set up `force_restore_data` flag and clickhouse will move the parts to detached. Data loss is possible if the issue is a result of misconfiguration (i.e. someone accidentally has fixed xml configs with incorrect shard/replica macros, data will be moved to detached folder and can be recovered).
1. If you are OK with accepting the [data loss](/altinity-kb-setup-and-maintenance/recovery-after-complete-data-loss/): set the `force_restore_data` flag and ClickHouse will move the parts to detached. Data loss is possible; if the issue is the result of misconfiguration (i.e. someone accidentally changed XML configs with incorrect [shard/replica macros](https://altinity.com/webinarspage/deep-dive-on-clickhouse-sharding-and-replication)), the data will be moved to the detached folder and can be recovered.

```bash
sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
