Add multi-database support to cluster mode #1671
base: unstable
Conversation
This commit introduces multi-database support in cluster mode while maintaining backward compatibility and requiring no API changes. Key features:
- Database-agnostic hashing: the hashing algorithm is unchanged, so identical keys map to the same slot across all databases and slot calculation is untouched. This keeps key distribution consistent and stays compatible with existing single-database setups.
- The implementation is fully backward compatible, with no API changes.
- The core structure remains an array of databases, each containing a list of hashtables (one per slot).
- Cluster management commands are global, except for GETKEYSINSLOT and COUNTKEYSINSLOT, which run in the selected-DB context.
- MIGRATE operates in the selected-DB context. Note that the MIGRATE parameter destination-db is honored, so keys can be migrated to a different database on the target, as in non-cluster mode.

When multiple databases are in use, slot migration iterates through all databases (see the sketch below):
SELECT database
keys = GETKEYSINSLOT
MIGRATE source target keys

Valkey-cli has been updated to support resharding across all databases.

Signed-off-by: xbasel <[email protected]>
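A rough sketch of that per-database loop as a client would drive it (hiredis-based; the function shape, batch size, and error handling are illustrative, not the actual valkey-cli code):

#include <hiredis/hiredis.h>

/* Illustrative only: move one slot's keys, database by database. */
void migrate_slot_all_dbs(redisContext *src, const char *dst_host,
                          int dst_port, int slot, int dbnum) {
    for (int db = 0; db < dbnum; db++) {
        /* Run GETKEYSINSLOT in the context of each database. */
        redisReply *r = redisCommand(src, "SELECT %d", db);
        freeReplyObject(r);
        r = redisCommand(src, "CLUSTER GETKEYSINSLOT %d %d", slot, 100);
        for (size_t i = 0; i < r->elements; i++) {
            /* destination-db mirrors the source DB, as in non-cluster mode. */
            redisReply *m = redisCommand(src, "MIGRATE %s %d %s %d %d",
                                         dst_host, dst_port,
                                         r->element[i]->str, db, 5000);
            freeReplyObject(m);
        }
        freeReplyObject(r);
    }
}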
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## unstable #1671 +/- ##
============================================
+ Coverage 70.97% 71.15% +0.17%
============================================
Files 121 123 +2
Lines 65238 65542 +304
============================================
+ Hits 46305 46638 +333
+ Misses 18933 18904 -29
@@ -1728,12 +1714,6 @@ void swapMainDbWithTempDb(serverDb *tempDb) {
void swapdbCommand(client *c) {
    int id1, id2;

    /* Not allowed in cluster mode: we have just DB 0 there. */
Would that be enough for swapdb to work in cluster mode? What will happen in a setup with 2 shards, each responsible for half of the slots in the DBs?
With this implementation SWAPDB must be executed on all primary nodes. There are three options:
- Allow SWAPDB and shift responsibility to the user – risky and non-atomic; can cause temporary inconsistency and data corruption, and needs strong warnings.
- Keep SWAPDB disabled in cluster mode – safest, avoids inconsistency.
- Make SWAPDB cluster-wide and atomic – complex, unclear feasibility.
I think option 2 is the safest bet. @JoBeR007 wdyt?
Is SWAPDB replicated as a single command? Then it's atomic.
If it's risky, it's risky in standalone mode with replicas too, right?
I think we can allow it. Swapping the data can only be done in some non-realtime workloads anyway, I think.
I think "risky because of replication" and "risky because SWAPDB needs to be executed on all primary nodes" are unrelated: as a user you can't control the first, but the user is the main risk in the second case.
I would keep SWAPDB disabled in cluster mode if we decide to continue with this implementation.
In cluster mode, consistency is per slot.
> Is SWAPDB replicated as a single command? Then it's atomic.
> If it's risky, it's risky in standalone mode with replicas too, right?
> I think we can allow it. Swapping the data can only be done in some non-realtime workloads anyway, I think.

I don't think it's very risky with standalone replicas. The only downside is that if SWAPDB propagation to the replica takes time, a client might still read from the wrong database; at least the client won't be able to modify the wrong database, since it can only read.
In cluster mode, the same (logical) DB can be DB0 on one node and DB1 on another, but similar issues already exist today: FLUSHDB on one node doesn't clear the entire DB, since data exists in other slots/nodes. But as you said, consistency is per slot.
Yes, FLUSHDB is very similar in this regard. If a failover happens just before this command has been propagated to replicas, it's a big thing, but it's no surprise I think. The client can use WAIT or check replication offset to make sure the FLUSHDB or SWAPDB was successful on the replicas.
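As a concrete sketch of that check (hiredis, assuming an open redisContext *ctx; the replica count and timeout are illustrative):

/* After SWAPDB (or FLUSHDB), block until at least 1 replica has
 * acknowledged the write, or give up after 1000 ms. */
redisReply *r = redisCommand(ctx, "SWAPDB 0 1");
freeReplyObject(r);
r = redisCommand(ctx, "WAIT %d %d", 1, 1000);
printf("replicas acked: %lld\n", r->integer);
freeReplyObject(r);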
Regarding this, I think it is not just an issue of multi-database but is more related to atomic slot migration. If a shard is in a stable state (not undergoing slot migration), then flushdb/flushall/swapdb are safe. However, if slot migration is in progress, it might lead to data inconsistency.
I think this needs to be considered alongside atomic slot migration:
- During the ATM process, for slots being migrated, if we encounter flushall/flushdb, we can send a command like flushslot or flushslotall to the target shard.
- As for swapdb, I recommend temporarily prohibiting execution during the ATM process.
@PingXie @enjoy-binbin, please also take note of this.
Makes sense. @murphyjacob4 FYI
I made a comment on the issue about this, but it's also worth mentioning that it's hard to orchestrate SWAPDB. Even in steady state, flushdb and flushall are idempotent (you can send them multiple times) but swapdb isn't. If a command times out on one node, it's hard to reason about whether it was successful and how to retry it. I think we should continue to disable SWAPDB in cluster mode for now, unless we introduce an idempotent way to do the swap.
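To make the retry hazard concrete (a sketch, assuming an open hiredis connection ctx):

/* SWAPDB is its own inverse: if the first attempt actually succeeded
 * but the reply was lost, a blind retry silently undoes the swap.
 * Repeating FLUSHDB, by contrast, changes nothing. */
redisReply *r = redisCommand(ctx, "SWAPDB 0 1");
if (r == NULL) {                              /* timeout: did it apply? */
    r = redisCommand(ctx, "SWAPDB 0 1");      /* may swap the DBs back  */
}
if (r) freeReplyObject(r);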
Signed-off-by: zhaozhao.zz <[email protected]>
Signed-off-by: zhaozhao.zz <[email protected]>
@@ -1102,7 +1110,7 @@ getNodeByQuery(client *c, struct serverCommand *cmd, robj **argv, int argc, int
     * NODE <node-id>. */
    int flags = LOOKUP_NOTOUCH | LOOKUP_NOSTATS | LOOKUP_NONOTIFY | LOOKUP_NOEXPIRE;
    if ((migrating_slot || importing_slot) && !pubsubshard_included) {
-       if (lookupKeyReadWithFlags(&server.db[0], thiskey, flags) == NULL)
+       if (lookupKeyReadWithFlags(c->db, thiskey, flags) == NULL)
Here, I modified it to use c->db, so for most commands the key it wants to access can be correctly located. However, some cross-DB commands, such as COPY, still require additional checks. The ultimate solution is atomic slot migration, I believe. Once ATM is implemented, the TRYAGAIN issue will no longer occur.
I noticed that getNodeByQuery doesn't follow SELECTs either, so this might not be the right database. If you for example have:

    SELECT 0
    GET FOO
    SELECT 1
    GET FOO

c->db won't be correct here either. COPY and MOVE are also such problems, as mentioned. I wonder if there is some way to make this correct without having ATM, so we can limit the breakage if you're moving from standalone to cluster.
Generally, c->db can provide the correct context information. Are you referring to the scenario where the SELECT command is used within a transaction (MULTI/EXEC)?
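For example, a case like the following, where SELECT is queued inside the transaction and so presumably hasn't updated c->db yet when the queued commands are routed:

    MULTI
    SELECT 1
    GET foo
    EXEC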
I'm happy that we did "Unified db rehash method for both standalone and cluster #12848" when developing kvstore, which made the implementation of multi-database simpler.
Signed-off-by: xbasel <[email protected]>
Signed-off-by: xbasel <[email protected]>
We need to add history to the SWAPDB, SELECT, and MOVE JSON files to indicate they're supported since 9.0.
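A sketch of what that could look like, following the existing history-array convention in the command JSON files (the wording is illustrative):

"history": [
    ["9.0.0", "Now allowed in cluster mode."]
]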
/* Return 1 if no database holds any keys, 0 otherwise. */
int dbHasNoKeys(void) {
    for (int i = 0; i < server.dbnum; i++) {
        if (kvstoreSize(server.db[i].keys) != 0) {
            return 0;
        }
    }
    return 1;
}
I think we should move this to db.c
@@ -196,7 +196,7 @@ proc ::valkey_cluster::__method__masternode_notfor_slot {id slot} {
    error "Slot $slot is everywhere"
}

proc ::valkey_cluster::__dispatch__ {id method args} {
Avoid.
@@ -0,0 +1,481 @@
# Tests multi-databases in cluster mode
This is the legacy clustering system. Ideally this test should be in unit/cluster
@@ -0,0 +1,481 @@
# Tests multi-databases in cluster mode

proc pause {{message "Hit Enter to continue ==> "}} {
I see you used
Line 1214 in f85c933:
proc bp {{s {}}} {
        }
    }
}

} ;# tags

set ::singledb $old_singledb
Worth noting we no longer need this constraint, so we might want to consider removing this throughout the codebase except when we are running in external mode.
@@ -1,5 +1,12 @@
start_server {tags {"lazyfree"}} {
    test "UNLINK can reclaim memory in background" {

        # The test framework invokes "flushall", replacing kvstores even if empty.
I would rather we did a sync flushall in the test framework, then, so we don't have these random waits all over the place.
#1319