feat: max pool voting power #211

Merged

merged 16 commits into main from troy/max-pool-stake on Dec 20, 2024

Conversation

@troykessler (Member) commented Dec 18, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced maximum voting power per pool, enhancing the staking mechanism.
    • Added a new error for pools with zero delegation.
  • Bug Fixes

    • Added test cases for various scenarios, including effective stake evaluations and pool validations.
  • Documentation

    • Updated CHANGELOG.md to reflect new features and bug fixes.
  • Tests

    • Enhanced test coverage for effective stakes and voting power constraints across multiple scenarios.
    • Added tests for invalid bundles and scenarios involving stake fractions and foreign delegations.

# Conflicts:
#	x/bundles/keeper/keeper_suite_invalid_bundles_test.go
#	x/bundles/keeper/logic_bundles.go
#	x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go
#	x/bundles/keeper/logic_round_robin.go
#	x/bundles/keeper/logic_round_robin_test.go
#	x/bundles/types/expected_keepers.go
#	x/stakers/keeper/exported_functions.go
#	x/stakers/keeper/getters_params.go
#	x/stakers/keeper/getters_stake_fraction.go
#	x/stakers/keeper/getters_valaccount.go
#	x/stakers/keeper/logic_stakers.go
#	x/stakers/keeper/msg_server_join_pool.go
#	x/stakers/keeper/msg_server_join_pool_test.go
#	x/stakers/keeper/msg_server_update_stake_fraction_test.go
#	x/stakers/types/events.pb.go
@troykessler self-assigned this Dec 18, 2024
coderabbitai bot commented Dec 18, 2024

Walkthrough

This pull request introduces significant changes to the KYVE blockchain's staking and bundle validation mechanisms. The modifications primarily focus on enhancing the voting power calculation, stake fraction handling, and pool delegation logic. Key changes include adding a new stake_fraction field to the EventSlash message, updating methods for calculating total stake, and implementing a maximum voting power per pool constraint. These changes aim to improve the robustness and fairness of the validator selection and reward distribution processes.
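For orientation, a minimal sketch of the effective-stake relationship this PR builds on: a validator only puts its stake fraction of its bonded tokens at stake in a pool, and the pool total is the sum of those effective stakes, on which the new maximum voting power constraint is then enforced. Names and values below are illustrative only, not the actual keeper API:

package main

import "fmt"

// effectiveStake: a validator only risks stakeFraction of its bonded tokens in a
// given pool; the truncation mirrors the decimal-to-integer conversion in the keeper.
func effectiveStake(bondedTokens uint64, stakeFraction float64) uint64 {
	return uint64(stakeFraction * float64(bondedTokens))
}

func main() {
	// two validators committing different fractions of their bonded tokens
	stakes := map[string]uint64{
		"val1": effectiveStake(200_000, 0.5), // 100_000 at stake
		"val2": effectiveStake(100_000, 1.0), // 100_000 at stake
	}
	var total uint64
	for _, s := range stakes {
		total += s
	}
	// voting power of a validator = effective stake / pool total; the PR caps
	// this ratio at the new MaxVotingPowerPerPool parameter.
	for addr, s := range stakes {
		fmt.Printf("%s: voting power %.2f\n", addr, float64(s)/float64(total))
	}
}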

Changes

  • proto/kyve/stakers/v1beta1/events.proto: Added stake_fraction field to EventSlash message with detailed comments. Updated comments for existing fields.
  • x/bundles/types/expected_keepers.go: Updated StakerKeeper interface with new methods for stake and voting power management. Removed old delegation-related methods.
  • x/stakers/keeper/exported_functions.go: Renamed and added methods for stake calculation and voting power assessment. Replaced GetDelegationOfPool with GetTotalStakeOfPool.
  • Multiple test files: Updated test cases to reflect new stake and voting power calculation logic, including adjustments for maximum voting power per pool.

Possibly related PRs

  • fix: added test case for points #207: The changes in this PR involve adding a test case for points, which may relate to the overall functionality of staking and slashing mechanisms, but it does not directly connect to the modifications made to the EventSlash message in the main PR.

Suggested reviewers

  • shifty11
  • mbreithecker

Poem

🐰 Hop, hop, stake by stake,
A rabbit's tale of blockchain's fate
Voting power now constrained with care
Fairness blooms everywhere
Dance of validators, precise and light! 🌟


@coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
x/stakers/keeper/getters_stake_fraction.go (1)

Line range hint 68-72: Critical: Incorrect store prefix for index deletion

There's a mismatch in the store prefixes between setting and removing stake fraction entries. The index deletion should use StakeFractionChangeKeyPrefixIndex2 to match the prefix used in SetStakeFractionChangeEntry.

Apply this fix:

-  indexStore := prefix.NewStore(storeAdapter, types.StakeFractionChangeEntryKeyPrefix)
+  indexStore := prefix.NewStore(storeAdapter, types.StakeFractionChangeKeyPrefixIndex2)

This ensures that index entries are properly cleaned up when removing stake fraction entries.

🧹 Nitpick comments (11)
x/bundles/keeper/logic_round_robin_test.go (2)

Line range hint 40-49: Consider validating stakeFraction range.
Currently, the function does not explicitly check whether stakeFraction is below zero or above one. Implementing boundary checks and returning an error for invalid fractions would improve robustness.


256-268: Rounding test suggests correct integer distribution.
However, consider testing marginal scenarios with fractional remainders to ensure the system handles boundary rounding elegantly.

x/stakers/keeper/exported_functions.go (1)

132-245: Core function GetValidatorPoolStakes.
Implements a sophisticated distribution mechanism for stake capping, including cutoffs and redistributions. The final scale-down approach ensures no validator exceeds its allocated stake. Code is logically sound but quite complex.

x/bundles/keeper/keeper_suite_invalid_bundles_test.go (1)

92-96: PerformValidityChecks after each test.
This helps catch any internal state inconsistencies across test runs. Good hygiene.

util/arrays.go (1)

42-52: Implement a dedicated test for RemoveDuplicateStrings.
The function design is straightforward: it uses a map to filter duplicates while preserving the insertion order. To ensure correctness and performance, consider adding a small unit test with various scenarios (e.g., a single-element array, all duplicates, no duplicates, or mixed data).
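A hedged sketch of such a table-driven test, assuming RemoveDuplicateStrings takes a []string, returns a []string, and preserves first-seen order (the import path below is assumed to be the repository's module path):

package util_test

import (
	"testing"

	"github.com/KYVENetwork/chain/util"
)

func TestRemoveDuplicateStrings(t *testing.T) {
	cases := []struct {
		name string
		in   []string
		want []string
	}{
		{"empty", []string{}, []string{}},
		{"single element", []string{"a"}, []string{"a"}},
		{"all duplicates", []string{"a", "a", "a"}, []string{"a"}},
		{"no duplicates", []string{"a", "b", "c"}, []string{"a", "b", "c"}},
		{"mixed, order preserved", []string{"b", "a", "b", "c", "a"}, []string{"b", "a", "c"}},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := util.RemoveDuplicateStrings(tc.in)
			if len(got) != len(tc.want) {
				t.Fatalf("got %v, want %v", got, tc.want)
			}
			for i := range got {
				if got[i] != tc.want[i] {
					t.Fatalf("got %v, want %v", got, tc.want)
				}
			}
		})
	}
}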

x/bundles/types/expected_keepers.go (2)

44-46: Consider adding documentation for voting power calculation

The new methods would benefit from detailed documentation explaining:

  1. How total stake is calculated
  2. What constitutes "too high" voting power
  3. The implications of these checks on pool operations

Add documentation like:

+// GetTotalStakeOfPool returns the total stake for a pool
 GetTotalStakeOfPool(ctx sdk.Context, poolId uint64) (totalStake uint64)
+// GetValidatorPoolStakes returns a mapping of validator addresses to their stakes
 GetValidatorPoolStakes(ctx sdk.Context, poolId uint64, mustIncludeStakers ...string) map[string]uint64
+// IsVotingPowerTooHigh checks if a pool's voting power exceeds the maximum allowed
+// Returns true if the voting power is above the threshold defined in pool parameters
 IsVotingPowerTooHigh(ctx sdk.Context, poolId uint64) bool

44-46: Consider adding validation for edge cases

The new stake-based system should handle edge cases:

  1. Empty pools
  2. Single validator pools
  3. Stake changes near the voting power threshold

Consider implementing additional helper methods in the interface:

// Suggested additional methods
GetMinRequiredStake(ctx sdk.Context, poolId uint64) uint64
GetMaxAllowedStake(ctx sdk.Context, poolId uint64) uint64
proto/kyve/stakers/v1beta1/events.proto (1)

99-108: Documentation improvement suggestion for stake_fraction field

The changes look good! The improved field documentation and addition of stake_fraction enhance the event's clarity. However, consider making the stake_fraction comment more precise.

Consider this documentation improvement:

-  // stake_fraction is the percentage of how much of the validators total
-  // bonded amount was under risk for slashing
+  // stake_fraction is the decimal percentage (between 0 and 1) of the validator's total
+  // bonded amount that was at risk for slashing
x/bundles/keeper/logic_round_robin.go (1)

Line range hint 1-40: Update mathematical formula documentation to use consistent terminology.

The documentation uses "stake (+ delegation)" and variables like $s(n, r)$ which are described as "stake (+ delegation)", while the implementation now consistently uses "stake". Consider updating the documentation to use consistent terminology.

x/stakers/keeper/keeper_suite_effective_stake_test.go (1)

62-593: Consider adding more edge cases to strengthen test coverage.

While the test suite is comprehensive, consider adding these scenarios:

  1. Test with extremely large stake values to verify no overflow issues
  2. Test with minimum possible stake values
  3. Test rapid join/leave scenarios to verify stake calculations remain accurate
x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (1)

402-402: Method replacement aligns with architectural changes.

The replacement of GetDelegationOfPool with GetTotalStakeOfPool across multiple test assertions is consistent with the architectural shift from delegation-based to total stake calculations.

Also applies to: 579-579, 642-642, 698-698, 740-740, 870-870, 1031-1031, 1119-1119, 1202-1202, 1368-1368

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aab9233 and ef69783.

⛔ Files ignored due to path filters (1)
  • x/stakers/types/events.pb.go is excluded by !**/*.pb.go
📒 Files selected for processing (26)
  • proto/kyve/stakers/v1beta1/events.proto (1 hunks)
  • testutil/integration/checks.go (2 hunks)
  • util/arrays.go (1 hunks)
  • x/bundles/keeper/keeper_suite_invalid_bundles_test.go (17 hunks)
  • x/bundles/keeper/keeper_suite_stakers_leave_test.go (6 hunks)
  • x/bundles/keeper/keeper_suite_valid_bundles_test.go (3 hunks)
  • x/bundles/keeper/keeper_suite_zero_delegation_test.go (7 hunks)
  • x/bundles/keeper/logic_bundles.go (3 hunks)
  • x/bundles/keeper/logic_bundles_test.go (2 hunks)
  • x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (13 hunks)
  • x/bundles/keeper/logic_round_robin.go (1 hunks)
  • x/bundles/keeper/logic_round_robin_test.go (5 hunks)
  • x/bundles/types/errors.go (1 hunks)
  • x/bundles/types/expected_keepers.go (1 hunks)
  • x/query/keeper/grpc_query_pool.go (1 hunks)
  • x/query/keeper/helper.go (2 hunks)
  • x/stakers/keeper/exported_functions.go (8 hunks)
  • x/stakers/keeper/getters_params.go (0 hunks)
  • x/stakers/keeper/getters_stake_fraction.go (1 hunks)
  • x/stakers/keeper/keeper_suite_effective_stake_test.go (1 hunks)
  • x/stakers/keeper/logic_stakers.go (1 hunks)
  • x/stakers/keeper/msg_server_join_pool.go (0 hunks)
  • x/stakers/keeper/msg_server_join_pool_test.go (20 hunks)
  • x/stakers/keeper/msg_server_leave_pool_test.go (7 hunks)
  • x/stakers/keeper/msg_server_update_stake_fraction_test.go (21 hunks)
  • x/stakers/types/expected_keepers.go (1 hunks)
💤 Files with no reviewable changes (2)
  • x/stakers/keeper/getters_params.go
  • x/stakers/keeper/msg_server_join_pool.go
🔇 Additional comments (91)
x/bundles/keeper/logic_round_robin_test.go (7)

25-26: Great addition of stake fraction and voting power cap tests.
These changes expand the test scenario coverage to include more realistic staking distributions and ensure the maximum voting power constraint is validated.


37-38: Refactor ensures a uniform approach to stake fraction usage.
By internally calling joinDummyWithStakeFraction, you avoid duplicating logic when handling different stake ratios.


103-106: MaxVotingPowerPerPool initialization confirmed.
The parameter is set to "1", which aligns with your test scenario for maximum voting power. Ensure that all tests cover boundary scenarios.


232-237: New test: “Frequency analysis with stake fractions.”
This test scenario effectively demonstrates how stakers with different stake fractions are handled in round-robin selection. Keep verifying edge cases with extremely small or large fractional stakes.


238-255: Loop-based voting frequency checks are thorough.
The step-by-step test of small, medium, and large iteration counts validates the correctness of the approach over repeated cycles. No issues found.


269-269: No action needed here.


270-313: Maximum voting power cap test coverage.
Setting the MaxVotingPowerPerPool to "0.5" ensures any large single staker is capped. The test logic is robust, reflecting the correct re-weighting of other stakers.

x/stakers/keeper/exported_functions.go (10)

Line range hint 5-17: Refactored import statements and new stakertypes usage.
Using stakertypes clarifies ownership of constants and errors. No issues found.


30-30: Event emission using stakertypes.EventLeavePool.
The typed event usage ensures consistent logging.


68-72: Updated error handling for unauthorized valaccount.
Returning stakertypes.ErrValaccountUnauthorized is consistent with the new staker types module.


84-91: GetTotalStakeOfPool introduced.
Replaces delegation-based logic with stake-based logic, aligning with the new effective stake approach.


127-129: GetValidatorPoolStake now returns the effective stake.
This method no longer relies on raw delegations, ensuring consistency with the new approach.


248-251: Clarification on slashing risk.
The comments accurately describe how effective stake equates to the actual amount at risk. No further issues noted.


307-307: Slash function signature updated.
Now references stakertypes.SlashType, aligning with the standardized slash enumeration.


315-327: Dynamic pool-specific slash fraction.
The approach multiplies the slash fraction by the real stake fraction in a pool. This is logically consistent with the new max voting power logic.


340-345: Emitting EventSlash with stakeFraction.
You now include the final stakeFraction used in the slash calculation, which is beneficial for debugging.


94-104: New IsVotingPowerTooHigh check.
This function enforces max voting power constraints at the pool level. Consider adding test coverage for borderline scenarios, e.g., just enough stakers to satisfy the constraint.

✅ Verification successful

Let me search for test coverage of this function to verify if borderline scenarios are tested.


I see multiple test cases using IsVotingPowerTooHigh but need to examine the actual test scenarios to verify if they cover borderline cases.


Test coverage for IsVotingPowerTooHigh is comprehensive.
The function has extensive test coverage including borderline scenarios:

  • Tests with max voting power at 0% and 100%
  • Tests with exactly enough validators to satisfy the constraint
  • Tests with fewer validators than required
  • Tests with zero delegation validators
  • Tests with stake fraction variations affecting voting power
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verifies coverage of borderline scenarios calling IsVotingPowerTooHigh across test files
ast-grep --pattern '
if (k.stakerKeeper.IsVotingPowerTooHigh($_, $_)) {
  $$$
}
'

Length of output: 83


Script:

#!/bin/bash
# Search for test files related to IsVotingPowerTooHigh
rg -l "IsVotingPowerTooHigh" --type go | grep "_test.go"

# Look for actual test implementations
rg "func.*Test.*VotingPower" --type go

# Check the implementation context
rg "IsVotingPowerTooHigh" -B 3 -A 3 --type go

Length of output: 11110


Script:

#!/bin/bash
# Let's look at the test file content to understand the test scenarios
cat x/stakers/keeper/keeper_suite_effective_stake_test.go

Length of output: 22185

x/bundles/keeper/logic_bundles.go (4)

6-7: Removed redundancy in imports.
Combining references to util and types clarifies the usage flow.


39-43: Ensure max voting power feasibility.
You now return a VotingPowerTooHigh error if the pool stakers can’t fulfill the constraint. This prevents attempts to run an unsatisfiable pool.


46-46: Comparing minDelegation with total stake.
Switching from raw delegation to total stake is consistent with the new stake-based model.


500-519: Direct usage of GetValidatorPoolStakes in vote distribution.
By aggregating from the staker keeper, you ensure that the result reflects the new capping logic. This avoids stale or partial stake data.

x/bundles/keeper/keeper_suite_invalid_bundles_test.go (36)

26-27: Added test coverage for stake fractions and foreign delegations in invalid bundles.
This clarifies how the system handles partial or capped stakes in negative voting scenarios.


67-70: Initialization of MaxVotingPowerPerPool in test suite.
Sets up the scenario where large stakes can be capped while testing invalid bundle submission.


83-91: Re-initializing balances after funding.
Ensures the test logic begins with known states. This is critical for verifying slash calculations.


97-98: Test scenario: multiple validators with no foreign delegations.
Comprehensively covers upload slash for an invalid bundle.


145-152: Balances re-queried prior to second bundle.
Ensures the test verifies the correct slash amounts from the first invalid bundle.


174-175: Second submission for the subsequent bundle.
Nicely triggers the logic that drops the bundle due to insufficient valid votes.


240-240: Confirmed total stake after slash.
This line verifies the slash's effect on the pool stake. Good usage of GetTotalStakeOfPool.


276-295: Validators with foreign delegations scenario.
Ensures that stakers’ and delegators’ tokens are properly slashed. The test is thorough.


324-324: Wait for block time before next submission.
Simulates real chain progression.


340-345: Balance snapshotted for subsequent slash checks.
Prevents confusion about which bundle triggered which slash.


439-439: Post-slashed stake verification.
Ensuring the total stake equals 400*K after the slash indicates correct slash distribution.


475-494: Mix of valid and invalid votes test.
Demonstrates partial valid support but still fails quorum, ensuring correct slash logic.


540-541: Block time commits for second submission cycle.
Essential to replicate chain-based scheduling between uploads.


556-558: Capturing starting balances again.
Keeps slashing verifications consistent for the second invalid bundle.


560-561: Differentiating between staker and Valaddress.
Maintains clarity about which account is subject to slash or reward.


563-564: Minor detail: the snippet references the following lines; if line 564 changes, confirm the references remain consistent.


593-594: Second submission triggers slash for an invalid bundle.
This simulates the real scenario of partial support leading to an overall invalid conclusion.


671-671: Verifying total stake after multi-voter slash.
Ensures 450*K aligns with the aggregated slash logic.


716-727: New scenario: invalid bundle with partial stake fraction.
One validator invests 200*K but only half is at risk. This is crucial to test the capping logic.


739-740: Wait period ensures realistic timing.
Again verifying chain progression.


755-763: Third validator entry after the initial submission.
Clearly covers late joiners before the final invalid bundle.


765-773: Balance snapshots for slash verification.
Good practice to recheck final slashing logic with updated baseline.


774-805: Voting invalid on second submission.
Simulates real bundling with minimal duplication across stakers.


808-845: Proper checks for finalization or drop.
Ensures the bundle is dropped and staker is slashed accordingly.


855-860: Uploader slash respects actual risk.
Only half of the 200*K stake is subject to slash. Verified with final staker delegations.


894-910: Scenario: partial stake fraction plus foreign delegations.
Simultaneously tests delegated stake and capping logic.


922-923: Time-lapse ensures second proposal isn’t immediate.
Appropriately simulates real chain time.


938-946: Third validator entry with max fraction.
Mirrors the earlier approach to confirm consistent slash logic across all.


948-956: Balance snapshots for further slash checks.
Approach is consistent with prior tests.


958-972: Voters cast invalid votes.
Triggers second round logic. No issues found.


974-989: Second submission leads to drop.
All relevant stakers are slashed according to partial or full stake fraction.


1004-1022: No finalized bundle, verifying dropped state.
Ensures that the staker set is correctly pruned.


1023-1037: Uploader slash applies half stake fraction.
The new fraction-based approach is validated.


1038-1042: Accurate slash calculation for foreign delegators.
Ensures all 200*K is effectively accounted for, half from staker and half from delegator.


1043-1045: Pool stake remains consistent post-slash.
No unexpected side effects observed.


1059-1077: Verifications for non-uploader stakers.
No slash triggered on valid or abstained stakers. Logic is correct.

x/bundles/keeper/keeper_suite_valid_bundles_test.go (2)

28-28: No issues with the updated test documentation reference.
This line simply clarifies that the test will check for correct behavior under a newly introduced maximum voting power constraint.


967-1166: Validate the new test logic and check for hidden pitfalls.
The newly introduced scenario tests a multi-validator setting where one validator exceeds the configured maximum voting power limit (40%) and votes “invalid,” triggering a partial slash. Key points to verify and consider:

• The reduction of slashable stake to match the maximum voting power threshold is correct and ensures that only the stake above the limit is excluded from the slashing calculation.
• The final total stake (pool stake) after applying partial slash appears consistent with the updated logic and the code snippet.
• The code comments (e.g., “// TODO: why is staker 2 selected as next uploader?”) suggest there might be confusion around the chosen next uploader. Consider clarifying the selection mechanism in code or in a comment.
• The test thoroughly checks final rewards distribution, partial slash amounts, delegation integrity, and next uploader assignment, ensuring alignment with the new constraints.

Overall, this test is well-structured. However, it is recommended to add edge-case coverage, such as verifying the boundary condition exactly at the 40% limit and checking that rounding/truncation of fractional slash amounts is handled correctly.
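On the rounding/truncation point, a small hedged example of how a fractional slash amount truncates with the SDK's legacy decimals (the numbers are illustrative and not taken from this test):

package main

import (
	"fmt"

	"cosmossdk.io/math"
)

func main() {
	// a 10% slash applied to the portion of a 200_000 token stake that is actually
	// at risk (stake fraction 0.5); both multiplications truncate toward zero
	slashFraction := math.LegacyMustNewDecFromStr("0.1")
	stakeFraction := math.LegacyMustNewDecFromStr("0.5")

	atRisk := stakeFraction.MulInt64(200_000).TruncateInt64() // 100000
	slashed := slashFraction.MulInt64(atRisk).TruncateInt64() // 10000

	fmt.Println("at risk:", atRisk, "slashed:", slashed)
}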

x/stakers/types/expected_keepers.go (2)

4-4: Import statement organization is acceptable.
The additional import of "cosmossdk.io/math" is consistent with usage in the pool keeper’s interface for handling legacy decimal.


10-10: New PoolKeeper method is consistent with naming.
“GetMaxVotingPowerPerPool” method clearly communicates its purpose. Ensure that related calls throughout the codebase handle decimals consistently, particularly for boundary conditions or precision errors.

x/bundles/types/errors.go (1)

27-27: Appropriate sentinel error for zero delegation scenarios.
Introducing “ErrPoolHasZeroDelegation” with code 1209 clarifies error cases in which a pool has no active stakes. Be sure to propagate this error consistently in the relevant logic paths that rely on pool stake checks.
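For context, a hedged sketch of how such a sentinel is typically registered with the Cosmos SDK errors package (the codespace is assumed to be the bundles module name; the real declaration lives in x/bundles/types/errors.go next to the other sentinels):

package main

import (
	"fmt"

	errorsmod "cosmossdk.io/errors"
)

// Illustrative registration of the new sentinel; code 1209 matches the PR.
var ErrPoolHasZeroDelegation = errorsmod.Register("bundles", 1209, "pool has zero delegation")

func main() {
	totalStake := uint64(0)
	if totalStake == 0 {
		// callers can match the sentinel with errors.Is or errorsmod.IsOf
		fmt.Println(ErrPoolHasZeroDelegation)
	}
}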

x/stakers/keeper/logic_stakers.go (1)

26-28: Verify stake calculation consistency across the codebase

The change from delegationAmount to stake aligns with the architectural shift from delegation-based to total stake calculations. However, we should verify this change is consistent across all pool management logic.

x/query/keeper/grpc_query_pool.go (1)

63-63: ⚠️ Potential issue

Breaking Change: Pool delegation calculation method updated

The switch from GetDelegationOfPool to GetTotalStakeOfPool might affect API clients that rely on the totalDelegation field in pool responses. This change is part of the architectural shift from delegation-based to stake-based calculations.

✅ Verification successful

The previous search failed due to unsupported file type. Let's try a broader search to understand the API impact and also explicitly search proto files to check for any breaking changes in the API definitions.


Based on the search results, I can now generate a final response. The code shows that GetTotalStakeOfPool is used consistently throughout the codebase for pool stake calculations, and the change from GetDelegationOfPool to GetTotalStakeOfPool is part of a broader architectural shift from delegation-based to stake-based calculations.

No breaking changes for API clients

The change from GetDelegationOfPool to GetTotalStakeOfPool in grpc_query_pool.go is an internal implementation detail that doesn't affect the API contract. The TotalDelegation field in the protobuf definitions remains unchanged, and clients will continue to receive the total stake value through the same field name. This is a compatible change that aligns the internal implementation with the architectural shift to stake-based calculations while maintaining API compatibility.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for API clients that might be affected by this change
rg -A 3 "TotalDelegation.*Pool|GetDelegationOfPool" --type proto --type go

Length of output: 106


Script:

#!/bin/bash
# Search for API clients and proto definitions related to pool delegation/stake
rg -A 3 "TotalDelegation|GetDelegationOfPool|GetTotalStakeOfPool" 

# Also check proto files specifically (without type filter)
find . -name "*.proto" -type f -exec grep -A 3 "TotalDelegation" {} \;

Length of output: 56293

x/bundles/types/expected_keepers.go (1)

44-46: Verify implementation of new StakerKeeper interface methods

The addition of new methods (GetTotalStakeOfPool, GetValidatorPoolStakes, IsVotingPowerTooHigh) and removal of delegation-related methods represents a significant change in stake management. These changes support the max pool voting power feature.

Key points to verify:

  1. All implementations of StakerKeeper interface are updated
  2. The IsVotingPowerTooHigh check is properly integrated into critical paths
✅ Verification successful

Let me generate additional scripts to verify the implementation completeness.


Let me try one more search to find the implementations using a different pattern.


Based on the gathered information, I can now provide the final response:

Implementation of StakerKeeper interface methods is complete and well-integrated

The new interface methods are fully implemented in x/stakers/keeper/exported_functions.go with proper integration:

  • GetTotalStakeOfPool calculates total stake by summing individual stakes
  • GetValidatorPoolStakes implements stake distribution with voting power considerations
  • IsVotingPowerTooHigh is properly used in critical paths:
    • Bundle processing (x/bundles/keeper/logic_bundles.go)
    • Pool status checks (x/query/keeper/helper.go)

The implementation is thoroughly tested as evidenced by multiple test cases in keeper_suite_effective_stake_test.go covering various voting power scenarios.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for StakerKeeper implementations
echo "Searching for StakerKeeper implementations..."
rg -A 5 "type.*Keeper.*struct" --type go

# Verify IsVotingPowerTooHigh usage in critical paths
echo "Checking IsVotingPowerTooHigh integration..."
rg -A 3 "IsVotingPowerTooHigh" --type go

Length of output: 8211


Script:

#!/bin/bash
# Search for the actual implementation of StakerKeeper interface methods
echo "Searching for GetTotalStakeOfPool implementation..."
ast-grep --pattern 'func (k Keeper) GetTotalStakeOfPool(ctx sdk.Context, poolId uint64) (totalStake uint64) {'

echo "Searching for GetValidatorPoolStakes implementation..."
ast-grep --pattern 'func (k Keeper) GetValidatorPoolStakes(ctx sdk.Context, poolId uint64, mustIncludeStakers ...string) map[string]uint64 {'

# Search for msg_server implementations that might need voting power checks
echo "Searching for msg server handlers..."
rg "func.*MsgServer.*" --type go -A 1 x/stakers/keeper/

Length of output: 782


Script:

#!/bin/bash
# Search for implementations in exported_functions.go which is the likely location
echo "Searching in exported_functions.go..."
rg "func \(k Keeper\) Get(Total|Validator)" x/stakers/keeper/exported_functions.go -A 10

# Check msg server implementations for stake-related operations
echo "Checking stake-related message handlers..."
rg "func \(m msgServer\)" x/stakers/keeper/msg_server.go -A 2

# Look for any stake-related hooks or callbacks
echo "Checking for stake-related hooks..."
rg "func \(k Keeper\) (Before|After)Stake" --type go

Length of output: 2795

x/query/keeper/helper.go (2)

43-43: LGTM: Consistent with new total stake approach

The change from GetDelegationOfPool to GetTotalStakeOfPool aligns with the new staking calculation approach.


79-82: Verify the order of pool status checks

The implementation looks good, but let's verify the priority order of these status checks. Should "voting power too high" be checked before "not enough delegation"?

✅ Verification successful

The search results provide a comprehensive view of the pool status definitions and their order of checks. Looking at the implementation in x/query/keeper/helper.go, we can see the complete order of status checks:

  1. POOL_STATUS_ACTIVE (default)
  2. POOL_STATUS_UPGRADING (if upgrade scheduled)
  3. POOL_STATUS_DISABLED (if disabled)
  4. POOL_STATUS_END_KEY_REACHED (if end key reached)
  5. POOL_STATUS_VOTING_POWER_TOO_HIGH (if voting power too high)
  6. POOL_STATUS_NOT_ENOUGH_DELEGATION (if below min delegation)
  7. POOL_STATUS_NO_FUNDS (if no active funding)

The order of status checks is correct and well-justified

The current order makes sense because:

  • "Voting power too high" is a critical security check that should halt the pool regardless of delegation amount
  • "Not enough delegation" is a secondary operational constraint that should be checked after security concerns
  • This matches the enum definition order in the protobuf (pool.pb.go) where VOTING_POWER_TOO_HIGH (6) has higher priority than NOT_ENOUGH_DELEGATION (4)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other pool status checks to verify the standard order
rg -A 10 "POOL_STATUS_" --type go

Length of output: 6041

x/bundles/keeper/logic_round_robin.go (1)

78-86: LGTM! Improved naming consistency.

The renaming from delegation to stake improves clarity and maintains consistency with the broader codebase changes.

testutil/integration/checks.go (2)

104-104: LGTM: Method name change accurately reflects the functionality.

The change from GetDelegationOfPool to GetTotalStakeOfPool better represents what's being retrieved and aligns with the new max pool voting power feature.


178-178: LGTM: Consistent method name change.

The change maintains consistency with the previous change and accurately reflects the verification being performed.

x/bundles/keeper/keeper_suite_stakers_leave_test.go (6)

143-143: LGTM: Proper verification of total stake after staker leaves.

The test correctly verifies that the total stake remains at 200 KYVE after a staker leaves while being the next uploader.


231-231: LGTM: Proper verification of total stake with uploader rewards.

The test correctly verifies that the total stake remains at 200 KYVE after a staker leaves while being entitled to uploader rewards.


347-347: LGTM: Proper verification of total stake with slashing.

The test correctly verifies that the total stake remains at 200 KYVE after a staker leaves while being slashed.


447-447: LGTM: Proper verification of total stake with voter slashing.

The test correctly verifies that the total stake remains at 200 KYVE after a voter leaves while being slashed.


539-539: LGTM: Proper verification of total stake with voter points.

The test correctly verifies that the total stake remains at 200 KYVE after a voter with points leaves.


634-634: LGTM: Proper verification of total stake with voter without max points.

The test correctly verifies that the total stake remains at 200 KYVE after a voter without max points leaves.

x/stakers/keeper/keeper_suite_effective_stake_test.go (1)

1-31: LGTM: Comprehensive test coverage for max pool voting power feature.

The test suite provides excellent coverage of various scenarios:

  • Validators below/above max pool voting power
  • Edge cases with zero delegation
  • Boundary conditions (0% and 100% max pool stake)
  • Stake fraction interactions
x/bundles/keeper/keeper_suite_zero_delegation_test.go (6)

294-294: LGTM: Consistent with the new total stake calculation approach

The change from GetDelegationOfPool to GetTotalStakeOfPool aligns with the new architecture for stake calculations.


332-335: LGTM: Improved test setup with direct state manipulation

Manually setting the next uploader simplifies the test setup by removing the dependency on MsgClaimUploaderRole transaction. This makes the tests more focused and reliable.


424-424: LGTM: Correct next uploader assertion

The assertion correctly verifies that the next uploader is set to STAKER_1, maintaining consistency with the test scenario.


510-513: Same pattern for manual next uploader setting

This follows the same pattern as the earlier change for manually setting the next uploader.


622-622: Same pattern for total stake calculation

This follows the same pattern as the earlier change from delegation to total stake calculation.


784-787: Same pattern for manual next uploader setting

This follows the same pattern as the earlier changes for manually setting the next uploader.

x/bundles/keeper/logic_bundles_test.go (2)

369-369: Same change in voting power validation

This follows the same pattern as the earlier change in voting power validation.


245-245: Verify the security implications of allowing high voting power

The change from expecting ErrVotingPowerTooHigh to expecting nil indicates that pools can now run with high voting power concentration. This could have security implications.

x/stakers/keeper/msg_server_join_pool_test.go (1)

74-77: Verify the implications of allowing 100% voting power

Setting MaxVotingPowerPerPool to 1 allows a single staker to have complete control over a pool. This is a significant change that requires careful consideration.

x/bundles/keeper/logic_end_block_handle_upload_timeout_test.go (2)

27-27: LGTM: Test case name updated to reflect new functionality.

The test case name has been appropriately updated from checking "50% voting power" to "validators with too much voting power", which better reflects the new implementation using configurable max voting power parameter.


Line range hint 284-313: Verify the max voting power parameter value.

The test sets max voting power to 20% but both validators have equal stakes of 100 KYVE each (50% each). This explains why their stakes are reduced to zero. However, consider adding test cases for:

  1. Validators just below the max voting power threshold
  2. Multiple validators with varying stake amounts
✅ Verification successful

Let me search for more context about the test cases and stake amounts.


Based on the test file analysis, I can now generate the final response:

The test case correctly verifies max voting power behavior at 20%.

The test file contains comprehensive test cases for various voting power scenarios, including:

  • Test case "Test effective stake with fewer validators than required to undercut the max pool voting power" specifically tests 20% max voting power with equal stakes (100 KYVE each), verifying that stakes are reduced to zero when validators can't meet the threshold
  • Other test cases verify different thresholds (0%, 35%, 50%, 100%) and scenarios with:
    • Validators below threshold
    • Single validator above threshold
    • Multiple validators above threshold
    • Varying stake amounts and stake fractions

The original test case is well-designed and consistent with the test suite's verification strategy.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other test cases that might be testing different voting power scenarios
rg -A 5 "MaxVotingPowerPerPool.*0.2" 

Length of output: 1639


Script:

#!/bin/bash
# Search for test cases in effective stake test file to understand the stake scenarios
cat x/stakers/keeper/keeper_suite_effective_stake_test.go

Length of output: 22185

x/stakers/keeper/msg_server_leave_pool_test.go (2)

46-48: Verify the implications of MaxVotingPowerPerPool parameter

Setting MaxVotingPowerPerPool to 1 (100%) in tests. This parameter caps the voting power per pool, which could affect staking behavior and validator influence.

✅ Verification successful

Let me gather more information about how this parameter is used in the test files to understand the implications better.


Setting MaxVotingPowerPerPool to 1 is appropriate for this test case

The verification reveals that:

  1. The default production value is 0.5 (50%) as defined in x/pool/types/params.go
  2. The parameter is properly validated to be between 0 and 1 through ValidatePercentage
  3. Various test files use different values (0.2, 0.35, 0.4, 0.5, 1.0) depending on their test scenarios
  4. Setting it to 1 (100%) in this test is intentional to test the maximum allowed voting power scenario

The test is correctly using a valid edge case value to verify the system's behavior at maximum voting power limit.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for other test files that might need this parameter update
# and verify the production configuration

# Search for MaxVotingPowerPerPool in test files
rg -l "MaxVotingPowerPerPool" --type go --glob "*_test.go"

# Check the default value in production code
rg "MaxVotingPowerPerPool" --type go --glob "!*_test.go"

Length of output: 2387


Script:

#!/bin/bash
# Check the context of MaxVotingPowerPerPool usage in test files
rg "MaxVotingPowerPerPool" --type go --glob "*_test.go" -B 2 -A 2

# Check the validation logic for this parameter
ast-grep --pattern 'func ValidatePercentage($_) error {
  $$$
}'

Length of output: 12771


93-93: Verify the consistency of stake calculation changes

The test assertions have been updated to use GetTotalStakeOfPool instead of GetDelegationOfPool. This change reflects a shift in how stake is calculated and should be consistently applied across the codebase.

Also applies to: 117-117, 166-166, 190-190, 271-271, 293-293

✅ Verification successful

Let me gather more information to verify the consistency of the stake calculation changes.


Based on the results, I can now provide the final response:

The change from GetDelegationOfPool to GetTotalStakeOfPool is correctly implemented

The codebase shows a consistent transition from GetDelegationOfPool to GetTotalStakeOfPool. The only remaining usages of GetDelegationOfPool are in the x/delegation module's own implementation and one query endpoint, which is expected since they serve a different purpose. The new GetTotalStakeOfPool is properly integrated across all test files and is defined in the StakerKeeper interface, indicating a deliberate architectural change in stake calculations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify consistent usage of GetTotalStakeOfPool
# and ensure GetDelegationOfPool is fully replaced

# Check for any remaining usage of GetDelegationOfPool
rg "GetDelegationOfPool" --type go

# Check the implementation of GetTotalStakeOfPool
ast-grep --pattern 'func $_ GetTotalStakeOfPool($_, $_) $_'

Length of output: 457


Script:

#!/bin/bash
# Search for GetTotalStakeOfPool usage
rg "GetTotalStakeOfPool" --type go -A 2

# Search for any test files that might still use GetDelegationOfPool
rg "GetDelegationOfPool.*test" --type go

# Look for the implementation of GetTotalStakeOfPool
ast-grep --pattern 'type $_ interface {
  $$$
  GetTotalStakeOfPool($$$)
  $$$
}'

Length of output: 29755

x/stakers/keeper/msg_server_update_stake_fraction_test.go (1)

53-55: LGTM: Consistent test updates

The changes in this file mirror those in msg_server_leave_pool_test.go:

  1. Setting MaxVotingPowerPerPool to 1
  2. Updating stake calculation assertions to use GetTotalStakeOfPool

The changes are consistent and maintain test coverage for the stake fraction update functionality.

Also applies to: 78-78, 94-94, 118-118, 134-134, 143-143, 159-159, 175-175, 191-191, 207-207, 230-230, 239-239, 262-262, 271-271, 285-285, 300-300, 309-309, 351-351, 356-356, 365-365, 370-370, 378-378, 387-387, 395-395, 409-409


@coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
x/stakers/keeper/exported_functions.go (4)

30-33: Consider handling event emission errors

The event emission error is silently discarded. While this might be intentional, consider logging the error for debugging purposes.

-	_ = ctx.EventManager().EmitTypedEvent(&stakertypes.EventLeavePool{
+	if err := ctx.EventManager().EmitTypedEvent(&stakertypes.EventLeavePool{
		PoolId: poolId,
		Staker: staker,
-	})
+	}); err != nil {
+		ctx.Logger().Error("failed to emit LeavePool event", "error", err)
+	}

84-91: Add time complexity documentation

The function iterates through all validators to sum their stakes. Consider documenting the time complexity O(n) where n is the number of validators.

+// GetTotalStakeOfPool returns the amount in ukyve which actively secures
+// the given pool.
+// Time complexity: O(n) where n is the number of validators in the pool.
 func (k Keeper) GetTotalStakeOfPool(ctx sdk.Context, poolId uint64) (totalStake uint64) {

196-198: Clarify whether the stable sort is needed

The current implementation uses sort.SliceStable, but it is not obvious that stability is required here. If it isn't, consider the regular sort.Slice for slightly better performance; if it is (e.g., for deterministic ordering of equal stakes), document the requirement.

-	sort.SliceStable(validators, func(i, j int) bool {
+	sort.Slice(validators, func(i, j int) bool {
 		return validators[i].Stake > validators[j].Stake
 	})

132-147: Consider extracting stake calculation logic

The stake calculation logic in GetValidatorPoolStakes is complex and handles multiple responsibilities. Consider extracting the redistribution and scaling logic into separate functions for better maintainability and testability.

Example structure:

func (k Keeper) GetValidatorPoolStakes(ctx sdk.Context, poolId uint64, mustIncludeStakers ...string) map[string]uint64 {
    stakes := k.calculateInitialStakes(ctx, poolId, mustIncludeStakers...)
    if !k.needsRedistribution(ctx, stakes) {
        return stakes
    }
    stakes = k.redistributeExcessStakes(ctx, stakes)
    return k.scaleStakes(ctx, stakes)
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9037988 and 6976d20.

📒 Files selected for processing (2)
  • x/stakers/keeper/exported_functions.go (8 hunks)
  • x/stakers/keeper/keeper_suite_effective_stake_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • x/stakers/keeper/keeper_suite_effective_stake_test.go

x/stakers/keeper/exported_functions.go: 3 resolved review threads (collapsed)

@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
x/stakers/keeper/exported_functions.go (2)

94-104: LGTM! Consider adding documentation about the voting power calculation.

The function correctly implements the voting power check. Consider adding a comment explaining the mathematical relationship between the number of validators and the maximum voting power threshold.

 // IsVotingPowerTooHigh returns whether there are enough validators in a pool
 // to successfully stay below the max voting power
+// The function uses the relationship: 1/maxVotingPower > numberOfValidators
+// to determine if it's mathematically impossible to distribute voting power
+// without exceeding the maximum threshold.
 func (k Keeper) IsVotingPowerTooHigh(ctx sdk.Context, poolId uint64) bool {

127-270: LGTM! Consider caching frequently accessed values.

The implementation is thorough and correctly handles voting power distribution. The algorithm for stake redistribution and scaling is well-documented and mathematically sound.

Consider caching maxVotingPower and validator.GetBondedTokens() values that are accessed multiple times to improve performance.

 func (k Keeper) GetValidatorPoolStakes(ctx sdk.Context, poolId uint64, mustIncludeStakers ...string) map[string]uint64 {
     // ... existing code ...
+    // Cache maxVotingPower to avoid multiple keeper calls
+    maxVotingPower := k.poolKeeper.GetMaxVotingPowerPerPool(ctx)
     
     for _, address := range addresses {
         validator, _ := k.GetValidator(ctx, address)
+        // Cache bonded tokens to avoid multiple calls
+        bondedTokens := validator.GetBondedTokens()
         valaccount, _ := k.GetValaccount(ctx, poolId, address)
 
-        stake := uint64(valaccount.StakeFraction.MulInt(validator.GetBondedTokens()).TruncateInt64())
+        stake := uint64(valaccount.StakeFraction.MulInt(bondedTokens).TruncateInt64())
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6976d20 and 0c53e7a.

📒 Files selected for processing (1)
  • x/stakers/keeper/exported_functions.go (8 hunks)
🧰 Additional context used
📓 Learnings (1)
x/stakers/keeper/exported_functions.go (1)
Learnt from: troykessler
PR: KYVENetwork/chain#211
File: x/stakers/keeper/exported_functions.go:334-342
Timestamp: 2024-12-20T08:52:36.443Z
Learning: The `GetValidatorPoolStakes()` method ensures that the effective stake cannot exceed the validator's total bonded tokens, so no additional check is necessary.
🔇 Additional comments (3)
x/stakers/keeper/exported_functions.go (3)

17-17: LGTM! Import and event type changes are consistent.

The renaming of the import from types to stakertypes and the corresponding event type update maintain consistency throughout the codebase.

Also applies to: 30-33


84-91: LGTM! Efficient implementation of total stake calculation.

The function correctly calculates the total stake by summing up the effective stakes of all validators in the pool.


Line range hint 326-366: LGTM! Stake fraction calculation is correct.

The implementation correctly calculates the stake fraction based on effective stake, as confirmed by the learnings that GetValidatorPoolStake ensures the effective stake cannot exceed the validator's bonded tokens.

@mbreithecker merged commit 967b1d4 into main Dec 20, 2024
4 checks passed
@mbreithecker deleted the troy/max-pool-stake branch December 20, 2024 09:20