
[Upgrades] v0.0.12 upgrade #1043

Merged
merged 10 commits on Feb 4, 2025
Changes from 2 commits
6 changes: 3 additions & 3 deletions app/upgrades/historical.go
@@ -40,9 +40,9 @@
// For example, even if `ConsensusVersion` is not modified for any modules, it still might be beneficial to create
// an upgrade so node runners are signaled to start utilizing `Cosmovisor` for new binaries.
var UpgradeExample = Upgrade{
// PlanName can be any string. This code is executed when the upgrade with this plan name is submitted to the network.
// It is not necessarly should be a version, but it's usually the case as we introduce breaking changes and force
// the community to upgrade.
// PlanName can be any string.
// This code is executed when the upgrade with this plan name is submitted to the network.
// This does not necessarily need to be a version, but it's usually the case with consensus-breaking changes.

PlanName: "v0.0.0-Example",
CreateUpgradeHandler: defaultUpgradeHandler,

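For context (not part of this diff), `UpgradeExample` and `Upgrade_0_0_12` both instantiate the `Upgrade` wrapper type defined in app/upgrades/types.go. A minimal sketch of roughly what that type looks like, with field names inferred from its usage in this PR and import paths assumed to follow the cosmos-sdk v0.50 layout:

    import (
        storetypes "cosmossdk.io/store/types"
        upgradetypes "cosmossdk.io/x/upgrade/types"
        "github.com/cosmos/cosmos-sdk/types/module"

        "github.com/pokt-network/poktroll/app/keepers"
    )

    type Upgrade struct {
        // PlanName is the on-chain plan name that triggers this handler at the upgrade height.
        PlanName string

        // CreateUpgradeHandler builds the handler that runs parameter updates, state migrations, etc.
        CreateUpgradeHandler func(*module.Manager, *keepers.Keepers, module.Configurator) upgradetypes.UpgradeHandler

        // StoreUpgrades lists module stores to add/rename/delete (assumed field; not shown in this diff).
        StoreUpgrades storetypes.StoreUpgrades
    }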
4 changes: 2 additions & 2 deletions app/upgrades/types.go
@@ -17,10 +17,10 @@ const (
// The default PNF/DAO address in the genesis file for Alpha TestNet. Used to create new authz authorizations.
AlphaTestNetPnfAddress = "pokt1r6ja6rz6rpae58njfrsgs5n5sp3r36r2q9j04h"

// Authority address. Defaults to x/gov module account address. Used to create new authz authorizations.
// TECHDEBT: DO NOT use AlphaTestNetAuthorityAddress.
// This is the authority address used to create new authz authorizations. Defaults to x/gov module account address.
// Use `keepers.UpgradeKeeper.Authority(ctx, &upgradetypes.QueryAuthorityRequest{})` to query the authority address of the current Alpha Network.
// This hard-coded address is kept for record keeping historical purposes.
// NOTE: This hard-coded address is kept for record-keeping historical purposes.
AlphaTestNetAuthorityAddress = "pokt10d07y265gmmuvt4z0w9aw880jnsr700j8yv32t"

// The default PNF/DAO address in the genesis file for Beta TestNet. Used to create new authz authorizations.
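The TECHDEBT note above recommends querying the authority instead of relying on the hard-coded constant. A minimal sketch of that query from inside an upgrade handler, assuming the x/upgrade keeper exposes the standard `Authority` gRPC query and that its response carries the address in an `Address` field:

    authorityRes, err := keepers.UpgradeKeeper.Authority(ctx, &upgradetypes.QueryAuthorityRequest{})
    if err != nil {
        return vm, err
    }
    // Defaults to the x/gov module account address unless overridden at app wiring time.
    logger.Info("Resolved upgrade authority", "authority", authorityRes.Address)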
91 changes: 68 additions & 23 deletions app/upgrades/v0.0.12.go
@@ -14,25 +14,26 @@ import (

const Upgrade_0_0_12_PlanName = "v0.0.12"

// Upgrade_0_0_12 handles the upgrade to release `v0.0.12`. To be issued on both Alpha and Beta TestNets.
// Upgrade_0_0_12 handles the upgrade to release `v0.0.12`.
// This is planned to be issued on both Pocket Network's Shannon Alpha & Beta TestNets.
var Upgrade_0_0_12 = Upgrade{
PlanName: Upgrade_0_0_12_PlanName,
CreateUpgradeHandler: func(mm *module.Manager,
keepers *keepers.Keepers,
configurator module.Configurator,
) upgradetypes.UpgradeHandler {
// Parameter configurations aligned with repository config.yml specifications
// These values reflect the delta between v0.0.11 and current main branch
// Parameter configurations aligned with repository config.yml specifications.
// These values reflect the delta between v0.0.11 and the main branch as of #1043.
// Reference:
// - Comparison: https://github.com/pokt-network/poktroll/compare/v0.0.11..main
Review comment (Member):
Should we be using main or a specific sha (that equals to main) when you're preparing this PR?

That way this will be "evergreen"

Reply (Member Author):

TLDR: I want to use main here. This link is for the purpose of writing an upgrade. Once we go through the upgrade - we no longer need this code, perhaps only as a reference.

Complex answer, feel free to skip:

We're supposed to create a release and use artifacts (binaries & images) produced by that release to upgrade validators, full nodes, seeds, and relayminers.

The release can be created off any branch or any commit. It MUST include the upgrade plan code (introduced by this PR).

Now, the code in the main branch can drift between the moment of writing this upgrade code and creating a new release. So if we want to make things "right" - we need to have more branches, perhaps dedicated version branches. Do you remember we talked about how cosmos-sdk branches work? We also talked about over-engineering? If the codebase is active enough, we'd have no other choice but to have a similar complex branching system. For example, if there's a feature in the main branch we DON'T want to be included in the release - there's just no other way I can think of.

But we are a small team, and we can track that drift and adjust the upgrade code, if necessary, before the release. So I want to compare against main.

Reply (Member):

Moving conversation to discord to get this over the finish line.

https://discord.com/channels/824324475256438814/1332065342746525778/1336434350660784239

// - Direct diff: `git diff v0.0.11..main -- config.yml`
//
// Note: Parameter updates are derived from config.yml modifications, which serves
// as the source of truth for all parameter changes.
// DEV_NOTE: These parameter updates are derived from config.yml in the root directory
// of this repository, which serves as the source of truth for all parameter changes.
const (
supplierStakingFee = 1000000
serviceTargetNumRelays = 100
tokenomicsGlobalInflationPerClaim = 0.1
supplierStakingFee = 1000000 // uPOKT
serviceTargetNumRelays = 100 // num relays
tokenomicsGlobalInflationPerClaim = 0.1 // % of the claim amount
)

applyNewParameters := func(ctx context.Context) (err error) {
@@ -80,36 +81,76 @@ var Upgrade_0_0_12 = Upgrade{
return nil
}
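The body of `applyNewParameters` is collapsed in this view. A minimal sketch of the pattern it presumably follows for one of the constants above; the `TargetNumRelays` field name and the `GetParams`/`SetParams` keeper methods are assumptions, not taken from this diff:

    serviceParams := keepers.ServiceKeeper.GetParams(ctx)
    serviceParams.TargetNumRelays = serviceTargetNumRelays // hypothetical field name
    if err := keepers.ServiceKeeper.SetParams(ctx, serviceParams); err != nil {
        return err
    }
    logger.Info("Updated service module params", "target_num_relays", serviceTargetNumRelays)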

// Helper function to update all suppliers' RevShare to 100%. This is necessary to ensure that
// we have that value populated before suppliers are connected.
// Helper function to update all suppliers' RevShare to 100%.
// This is necessary to ensure that we have that value populated before suppliers are connected.
//
updateSuppliersRevShare := func(ctx context.Context) error {
logger := cosmosTypes.UnwrapSDKContext(ctx).Logger()
suppliers := keepers.SupplierKeeper.GetAllSuppliers(ctx)
logger.Info("Updating all suppliers to have a 100% revshare to the supplier",
logger.Info("Updating (overriding) all suppliers to delegate 100% revenue share to the supplier's operator address",
"num_suppliers", len(suppliers))

for _, supplier := range suppliers {
for _, service := range supplier.Services {
// Add warning log if we're overwriting existing revshare settings. We can't get the previous
// revshare percentage due to a protobuf change, but we can at least log the number of previous revshares.
if len(service.RevShare) > 1 {
Review comment (Member):
You also need to check if the address is different

Reply (Member Author):
Changed the logic a bit.

If more than 1 revshare entry: log and overwrite;
if exactly 1 revshare entry: just update the data, keeping the existing revshare address (assuming 100% goes to that one address).

// Warning: Overwriting existing revshare settings without preserving history.
// Note: While the canonical approach would be using Module Upgrade (docs.cosmos.network/v0.46/building-modules/upgrade)
// to handle protobuf type changes (see: github.com/cosmos/cosmos-sdk/blob/v0.46.0-rc1/x/bank/migrations/v043/store.go#L50-L71),
// we've opted for direct overwrite because:
// 1. No active revenue shares are impacted at time of writing
// 2. Additional protobuf and repo structure changes would be required for proper (though unnecessary) migration

// Create a string representation of just the revenue share addresses
revShareAddresses := "["
for i, rs := range service.RevShare {
if i > 0 {
revShareAddresses += ","
}
revShareAddresses += rs.Address
}
revShareAddresses += "]"
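A more idiomatic way to build the bracketed address list above is strings.Join over a slice; this is an editorial suggestion rather than part of the PR, and it requires importing "strings":

    addrs := make([]string, 0, len(service.RevShare))
    for _, rs := range service.RevShare {
        addrs = append(addrs, rs.Address)
    }
    revShareAddresses := "[" + strings.Join(addrs, ",") + "]"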
logger.Warn(
"Overwriting existing revenue share configuration",
"supplier", supplier.OperatorAddress,
"supplier_operator", supplier.OperatorAddress,
"supplier_owner", supplier.OwnerAddress,
"service", service.ServiceId,
"previous_revshare_count", len(service.RevShare),
"previous_revshare_addresses", revShareAddresses,
)
service.RevShare = []*sharedtypes.ServiceRevenueShare{
{
Address: supplier.OperatorAddress,
RevSharePercentage: uint64(100),
},
}
} else if len(service.RevShare) == 1 {
// If there is only one revshare setting, we can safely overwrite it (because it has 100%
// revenue share), keeping the existing address.
logger.Info("Updating supplier's revenue share configuration",
"supplier_operator", supplier.OperatorAddress,
"supplier_owner", supplier.OwnerAddress,
"service", service.ServiceId,
"previous_revshare_address", service.RevShare[0].Address,
)
currentRevShare := service.RevShare[0]
service.RevShare = []*sharedtypes.ServiceRevenueShare{
{
Address: currentRevShare.Address,
RevSharePercentage: uint64(100),
},
}
} else {
logger.Warn("That shouldn't happen: no revenue share configuration found for supplier",
"supplier_operator", supplier.OperatorAddress,
"supplier_owner", supplier.OwnerAddress,
"service", service.ServiceId,
)
}
service.RevShare = []*sharedtypes.ServiceRevenueShare{
{
Address: supplier.OperatorAddress,
RevSharePercentage: uint64(100),
},
}
}
keepers.SupplierKeeper.SetSupplier(ctx, supplier)
logger.Info("Updated supplier",
"supplier", supplier.OperatorAddress)
"supplier_operator", supplier.OperatorAddress,
"supplier_owner", supplier.OwnerAddress)
}
return nil
}
@@ -119,6 +160,9 @@ var Upgrade_0_0_12 = Upgrade{
logger := cosmosTypes.UnwrapSDKContext(ctx).Logger()
logger.Info("Starting upgrade handler", "upgrade_plan_name", Upgrade_0_0_12_PlanName)

logger.Info("Starting parameter updates section", "upgrade_plan_name", Upgrade_0_0_12_PlanName)
// Update all governance parameter changes.
// This includes adding params, removing params and changing values of existing params.
err := applyNewParameters(ctx)
if err != nil {
logger.Error("Failed to apply new parameters",
@@ -127,7 +171,8 @@
return vm, err
}

// Update all suppliers' RevShare
logger.Info("Starting supplier RevShare updates section", "upgrade_plan_name", Upgrade_0_0_12_PlanName)
// Override all suppliers' RevShare to delegate 100% to the supplier's operator address
err = updateSuppliersRevShare(ctx)
if err != nil {
logger.Error("Failed to update suppliers RevShare",
@@ -136,7 +181,7 @@
return vm, err
}

logger.Info("Running module migrations", "upgrade_plan_name", Upgrade_0_0_12_PlanName)
logger.Info("Starting module migrations section", "upgrade_plan_name", Upgrade_0_0_12_PlanName)
vm, err = mm.RunMigrations(ctx, configurator, vm)
if err != nil {
logger.Error("Failed to run migrations",
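For completeness (outside this diff), an upgrade defined this way is typically registered with the x/upgrade keeper during app construction. A sketch assuming poktroll follows the standard cosmos-sdk wiring; `allUpgrades`, `app.ModuleManager`, `app.Keepers`, and `app.Configurator()` are assumed names, not confirmed by this PR:

    for _, u := range allUpgrades { // e.g. []upgrades.Upgrade{..., upgrades.Upgrade_0_0_12}
        app.UpgradeKeeper.SetUpgradeHandler(
            u.PlanName,
            u.CreateUpgradeHandler(app.ModuleManager, &app.Keepers, app.Configurator()),
        )
    }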