feat: use append only on balances #655
base: main
Conversation
Walkthrough: This pull request introduces several enhancements to the Ledger system, focusing on volume tracking, account metadata, and database operations.
Codecov Report. Attention: Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## main #655 +/- ##
==========================================
- Coverage 81.71% 81.07% -0.64%
==========================================
Files 131 131
Lines 7059 7323 +264
==========================================
+ Hits 5768 5937 +169
- Misses 990 1077 +87
- Partials 301 309 +8
☔ View full report in Codecov by Sentry.
Actionable comments posted: 8
🧹 Nitpick comments (10)
internal/ledger_test.go (1)
12-12
: Ensure deterministic ordering in feature set string representation to prevent flaky tests.
The test assertion on line 12 compares the string representation of the feature set. Since map iteration is not ordered in Go, the output string may vary, causing the test to fail intermittently. Consider sorting the features before generating the string representation to make the test result consistent.
Apply this diff to sort features in the String() method:

 func (f FeatureSet) String() string {
-	var features []string
+	var featuresList []string
 	for feature, state := range f {
-		features = append(features, fmt.Sprintf("%s=%s", feature.Short(), state))
+		featuresList = append(featuresList, fmt.Sprintf("%s=%s", feature.Short(), state))
 	}
-	return strings.Join(features, ",")
+	sort.Strings(featuresList)
+	return strings.Join(featuresList, ",")
 }

internal/storage/ledger/volumes.go (1)
36-39
: Remove redundant error return.
The error is already resolved and returned at line 36. The additional return at line 39 is unnecessary.
Apply this diff to remove the redundant error return:
 if err != nil {
 	return postgres.ResolveError(err)
 }
-return err
+return nil

internal/posting.go (2)
67-69
: Optimize map initialization and add an early return.
Consider optimizing the map initialization and adding an early return for empty postings.
Apply this diff to optimize the function:
 func (p Postings) InvolvedVolumes() map[string][]string {
+	if len(p) == 0 {
+		return map[string][]string{}
+	}
-	ret := make(map[string][]string)
+	ret := make(map[string][]string, len(p)*2) // Pre-allocate for source and destination
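Taken together, the suggested shape of the function would look like the following self-contained sketch. The `Posting` fields mirror those used in the snippet above; the types here are illustrative stand-ins, not the project's actual definitions:

```go
package main

import "fmt"

// Posting mirrors the fields used by InvolvedVolumes (illustrative only).
type Posting struct {
	Source      string
	Destination string
	Asset       string
}

type Postings []Posting

// InvolvedVolumes maps each account to the assets it appears in,
// with an early return for the empty case and a pre-sized map.
func (p Postings) InvolvedVolumes() map[string][]string {
	if len(p) == 0 {
		return map[string][]string{}
	}
	// Pre-allocate: each posting touches at most two distinct accounts.
	ret := make(map[string][]string, len(p)*2)
	for _, posting := range p {
		ret[posting.Source] = append(ret[posting.Source], posting.Asset)
		ret[posting.Destination] = append(ret[posting.Destination], posting.Asset)
	}
	return ret
}

func main() {
	p := Postings{{Source: "world", Destination: "bank", Asset: "USD"}}
	fmt.Println(p.InvolvedVolumes()["bank"]) // [USD]
}
```

The pre-allocation hint of `len(p)*2` is an upper bound (each posting touches at most two accounts); Go maps grow automatically if the estimate is off, so the hint only saves rehashing.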
76-77
: Consider deduplicating assets during insertion.
For large datasets, it might be more efficient to deduplicate assets during insertion rather than at the end.
Here's an alternative implementation using a map for O(1) lookups:
-ret[posting.Source] = append(ret[posting.Source], posting.Asset)
-ret[posting.Destination] = append(ret[posting.Destination], posting.Asset)
+// Use map for O(1) lookups during insertion
+if _, exists := ret[posting.Source]; !exists {
+	ret[posting.Source] = make(map[string]struct{})
+}
+ret[posting.Source][posting.Asset] = struct{}{}
+
+if _, exists := ret[posting.Destination]; !exists {
+	ret[posting.Destination] = make(map[string]struct{})
+}
+ret[posting.Destination][posting.Asset] = struct{}{}

Then convert the maps to sorted slices at the end:
result := make(map[string][]string, len(ret))
for account, assets := range ret {
	slice := make([]string, 0, len(assets))
	for asset := range assets {
		slice = append(slice, asset)
	}
	slices.Sort(slice)
	result[account] = slice
}
return result

internal/storage/ledger/balances.go (2)
19-37
: LGTM! Consider using a more robust key generation method.
The implementation correctly prevents deadlocks by sorting lock keys. However, consider using a more robust key generation method to avoid potential collisions.
- lockKeys = append(lockKeys, fmt.Sprintf("%s-%s-%s", store.ledger.Name, account, asset))
+ lockKeys = append(lockKeys, fmt.Sprintf("ledger:%s:account:%s:asset:%s", store.ledger.Name, account, asset))
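To see why both the sorting and the delimited key format matter, here is a small self-contained sketch (the function and type names are illustrative, not the store's real API). Sorting gives every transaction the same acquisition order, which prevents lock-ordering deadlocks, and labeled segments avoid collisions such as `a-b`+`c` colliding with `a`+`b-c` under a plain `%s-%s-%s` format:

```go
package main

import (
	"fmt"
	"sort"
)

// buildLockKeys produces one lock key per (account, asset) pair.
// Keys are sorted so concurrent transactions always acquire locks in the
// same order, preventing lock-ordering deadlocks.
func buildLockKeys(ledgerName string, accountAssets map[string][]string) []string {
	var lockKeys []string
	for account, assets := range accountAssets {
		for _, asset := range assets {
			// Labeled segments avoid collisions that a plain "%s-%s-%s"
			// format allows when names themselves contain '-'.
			lockKeys = append(lockKeys, fmt.Sprintf("ledger:%s:account:%s:asset:%s", ledgerName, account, asset))
		}
	}
	sort.Strings(lockKeys)
	return lockKeys
}

func main() {
	keys := buildLockKeys("prod", map[string][]string{
		"world": {"USD"},
		"bank":  {"USD", "EUR"},
	})
	for _, k := range keys {
		fmt.Println(k)
	}
}
```

Because the keys are sorted, the output order is stable even though Go map iteration is randomized.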
75-86
: LGTM! Consider adding an index for performance optimization.
The implementation correctly handles volume locking and uses efficient aggregate functions. However, consider adding an index to optimize the grouped query.
Add the following index to improve query performance:
CREATE INDEX idx_accounts_volumes_ledger_address_asset ON accounts_volumes (ledger, accounts_address, asset);internal/storage/ledger/resource_accounts.go (1)
110-111
: LGTM! Consider adding indexes for aggregation performance.
The implementation correctly uses aggregate functions for balance and volume calculations. However, consider adding indexes to optimize the grouped queries.
Add the following indexes to improve aggregation performance:
CREATE INDEX idx_accounts_volumes_group ON accounts_volumes (ledger, accounts_address, asset, input, output);
CREATE INDEX idx_moves_group ON moves (ledger, accounts_address, asset, effective_date);

Also applies to: 173-174
internal/storage/ledger/accounts_test.go (2)
154-160
: Added debugging support for test failures.
The addition of table and query dumps will help diagnose test failures. However, consider using a test helper function to encapsulate this debugging logic.
+func dumpDebugInfo(t *testing.T, store *Store, ctx context.Context) {
+	t.Helper()
+	store.DumpTables(ctx, "transactions")
+	store.DumpQuery(ctx, store.GetDB().NewSelect().
+		TableExpr(store.GetPrefixedRelationName("moves")).
+		Where("ledger = ?", store.GetLedger().Name).
+		Where("accounts_address = 'account:1'").
+		Order("seq desc"))
+}
281-284
: Consider using a test helper for debugging.
Similar debugging code appears in multiple places. Consider using a shared helper function.
internal/controller/ledger/controller_default.go (1)
459-465
: Consider using a single timestamp for all fields.
The code sets the same timestamp (now) for InsertionDate, UpdatedAt, and FirstUsage. Consider:
- Using different timestamps based on the actual first usage
- Making the timestamp fields more explicit about their purpose

- now := time.Now()
- if err := store.UpsertAccounts(ctx, &ledger.Account{
-     Address:       parameters.Input.Address,
-     Metadata:      metadata,
-     InsertionDate: now,
-     UpdatedAt:     now,
-     FirstUsage:    now,
+ currentTime := time.Now()
+ if err := store.UpsertAccounts(ctx, &ledger.Account{
+     Address:       parameters.Input.Address,
+     Metadata:      metadata,
+     InsertionDate: currentTime, // When the account is first created
+     UpdatedAt:     currentTime, // When the metadata was last modified
+     FirstUsage:    currentTime, // When the account was first used in a transaction
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (6)
.github/workflows/benchmark-comparison.yml is excluded by !**/*.yml
go.mod is excluded by !**/*.mod
go.sum is excluded by !**/*.sum
internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml is excluded by !**/*.yaml
tools/generator/go.mod is excluded by !**/*.mod
tools/generator/go.sum is excluded by !**/*.sum
📒 Files selected for processing (23)
internal/README.md (99 hunks)
internal/api/v2/controllers_balances.go (2 hunks)
internal/controller/ledger/controller_default.go (2 hunks)
internal/ledger_test.go (1 hunks)
internal/posting.go (2 hunks)
internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql (1 hunks)
internal/storage/ledger/accounts.go (2 hunks)
internal/storage/ledger/accounts_test.go (9 hunks)
internal/storage/ledger/balances.go (3 hunks)
internal/storage/ledger/balances_test.go (2 hunks)
internal/storage/ledger/debug.go (1 hunks)
internal/storage/ledger/main_test.go (1 hunks)
internal/storage/ledger/moves_test.go (2 hunks)
internal/storage/ledger/resource_accounts.go (2 hunks)
internal/storage/ledger/resource_volumes.go (2 hunks)
internal/storage/ledger/transactions.go (3 hunks)
internal/storage/ledger/transactions_test.go (2 hunks)
internal/storage/ledger/volumes.go (2 hunks)
internal/storage/ledger/volumes_test.go (2 hunks)
pkg/features/features.go (4 hunks)
test/e2e/api_accounts_metadata_test.go (1 hunks)
test/e2e/app_lifecycle_test.go (1 hunks)
test/stress/stress_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- test/e2e/api_accounts_metadata_test.go
- test/stress/stress_test.go
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Tests
- GitHub Check: BenchmarkCompare
🔇 Additional comments (16)
internal/storage/ledger/debug.go (1)
17-18
: LGTM! Good security improvement.
Adding the ledger filter prevents potential data leakage across different ledgers during debugging. This is a good defensive programming practice.
Let's verify that this change doesn't break existing debug calls:
✅ Verification successful
Verified: The ledger filter addition is safe and consistent
The change aligns with existing patterns in the codebase where debug queries are already ledger-specific. All current usages in tests will continue to work as expected.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for existing DumpTables calls that might be affected

# Look for test files using DumpTables
rg -A 3 "DumpTables" --type go

# Look for potential debug statements that might rely on cross-ledger data
ast-grep --pattern 'DumpTables($$$)'

Length of output: 4900
internal/storage/ledger/accounts.go (2)
105-115
: Verify the correctness of struct tags in the Account struct.
The Account struct (lines 105-115) uses bun ORM struct tags. Please ensure that the struct tags align with the database schema and bun ORM expectations. Specifically, verify the default:'{}' syntax in the Metadata field (line 109) for correctness.
132-134
: ⚠️ Potential issue: Ensure safe construction of SQL query conditions to prevent SQL injection.
In the loop (lines 132-134), SQL query conditions are constructed by concatenating strings. While the arguments are passed separately, it's important to ensure that all ? placeholders in the condition string are correctly mapped to the corresponding arguments in argsForUpdate. Misalignment can lead to SQL injection vulnerabilities or runtime errors. [security]
internal/storage/ledger/transactions.go (1)
137-138
: Verify the correctness of volume adjustments in postCommitVolumes.
When adjusting postCommitVolumes in lines 137-138, you're negating posting.Amount before passing it to AddInput and AddOutput. Ensure that negating the amount aligns with the expected logic of volume adjustments in your ledger system to avoid incorrect balances.

internal/storage/ledger/main_test.go (1)
46-46
: LGTM!
Always adding the query hook in tests ensures consistent debugging capabilities across all test runs.
internal/storage/ledger/moves_test.go (1)
40-44
: LGTM! Account initialization properly includes new timestamp fields.
The Account struct initialization correctly includes all required fields with appropriate timestamp values.
internal/storage/ledger/resource_volumes.go (2)
64-67
: Improved query performance with aggregations and proper grouping.
The changes optimize the query by:
- Using sum aggregation for input/output calculations
- Adding proper grouping by accounts_address and asset

Also applies to: 71-71
83-83
: Enhanced consistency in array handling.
Using array_agg with [1] index selection ensures consistent results when retrieving account arrays and metadata.

Also applies to: 87-87
test/e2e/app_lifecycle_test.go (1)
113-113
: Improved transaction state monitoring using advisory locks.
The change from monitoring INSERT operations to advisory locks provides a more reliable way to track transaction states, as advisory locks are explicitly managed and less prone to false positives.
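Postgres advisory locks take 64-bit integer keys, so string lock keys (like the per-account keys discussed earlier in this review) have to be mapped to an int64 before calling functions such as pg_advisory_xact_lock. A hedged sketch of one such mapping; the FNV-1a hash is an illustrative choice, not necessarily what the ledger uses:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// advisoryLockKey maps a string lock key to the int64 argument expected by
// Postgres advisory-lock functions. Any stable 64-bit hash works, at the
// cost of a small collision risk between distinct keys.
func advisoryLockKey(key string) int64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return int64(h.Sum64())
}

func main() {
	k := advisoryLockKey("ledger:prod:account:world:asset:USD")
	// The statement a store might issue inside a transaction (illustrative):
	fmt.Printf("select pg_advisory_xact_lock(%d)\n", k)
}
```

Transaction-scoped advisory locks (pg_advisory_xact_lock) release automatically at commit or rollback, which is what makes them observable and reliable for the kind of monitoring this test change relies on.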
internal/storage/ledger/balances_test.go (2)
32-48
: Enhanced test setup with proper account initialization.
The changes improve test reliability by properly initializing the world account with timestamps and volumes.
62-78
: Consistent test setup across test cases.
The same initialization pattern is applied to the concurrent access test, ensuring consistency across test cases.
internal/storage/ledger/accounts_test.go (2)
477-484
: Improved account initialization with consistent timestamps.
The changes ensure that accounts are initialized with consistent timestamps, improving test reliability.

Also applies to: 488-492
508-515
: Enhanced account upsert test cases.
The test now properly verifies:
- Upsert behavior with future timestamps
- Retrieval of existing account data
- Modification of account fields

Also applies to: 519-550
internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql (1)
1-2
: ⚠️ Potential issue: Verify data integrity after primary key removal.
Dropping the primary key constraint from accounts_volumes is a significant change. Ensure that:
- No duplicate rows can be inserted
- Row uniqueness is maintained through application logic
- The performance impact on write operations is acceptable

Run this script to check for any existing duplicate rows that might cause issues:
✅ Verification successful
Primary key removal appears intentional - uniqueness no longer required
The migration sequence shows this is a planned change:
- Migration #25 first created the replacement index concurrently
- Migration #28 removes the PK and creates the final index structure
- The new index maintains query performance by covering key columns
The removal of uniqueness constraints appears to be an intentional design decision, not a data integrity concern.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for potential duplicate rows in accounts_volumes
rg -A 5 'accounts_volumes.*insert|accounts_volumes.*update'

Length of output: 61
Script:
#!/bin/bash
# Find table definition and related files
echo "=== Table Definition ==="
fd -e sql accounts_volumes

echo -e "\n=== Table References ==="
rg -l "accounts_volumes"

echo -e "\n=== Entity Definitions ==="
rg -l "AccountsVolume|AccountVolume"

Length of output: 1364
Script:
#!/bin/bash
echo "=== Migration #28 (Current) ==="
cat internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql

echo -e "\n=== Migration #25 (Previous Index) ==="
cat internal/storage/bucket/migrations/25-accounts-volumes-index/up.sql

echo -e "\n=== Entity Definition ==="
rg -A 10 "type AccountsVolume struct|type AccountVolume struct"

Length of output: 922
internal/controller/ledger/controller_default.go (1)
454-457
: LGTM! Defensive initialization of metadata.
Good practice to initialize metadata as an empty map if nil.
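The nil-map guard matters in Go because writing to a nil map panics while reading from one does not. A minimal illustration, assuming the metadata type is a plain map[string]string (the actual ledger type may differ):

```go
package main

import "fmt"

// ensureMetadata returns a non-nil metadata map so later writes are safe.
func ensureMetadata(metadata map[string]string) map[string]string {
	if metadata == nil {
		metadata = map[string]string{}
	}
	return metadata
}

func main() {
	var m map[string]string // nil: reads are fine, writes would panic
	m = ensureMetadata(m)
	m["owner"] = "alice" // safe after the guard
	fmt.Println(len(m))  // 1
}
```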
internal/README.md (1)
748-754
: Documentation for InvolvedVolumes() could be more descriptive.
While the method signature is clear, adding a description of the method's purpose and return value format would improve the documentation. Consider documenting:
- The purpose of mapping accounts to their involved assets
- The format and structure of the returned map
- Example usage scenarios

Let's verify if other similar methods in the codebase have descriptive documentation:
accounts := Map(accounts, func(from *ledger.Account) Account {
	return Account{
		Address:          from.Address,
		Metadata:         from.Metadata,
		FirstUsage:       from.FirstUsage.Time,
		InsertionDate:    from.InsertionDate.Time,
		UpdatedAt:        from.UpdatedAt.Time,
		Volumes:          from.Volumes,
		EffectiveVolumes: from.EffectiveVolumes,
	}
})
💡 Codebase verification
Remove .Time access from time fields in account mapping.
The time fields in the source ledger.Account struct are already of type time.Time. Accessing .Time on these fields will cause compilation errors. Apply the suggested changes:
- FirstUsage: from.FirstUsage.Time,
- InsertionDate: from.InsertionDate.Time,
- UpdatedAt: from.UpdatedAt.Time,
+ FirstUsage: from.FirstUsage,
+ InsertionDate: from.InsertionDate,
+ UpdatedAt: from.UpdatedAt,
🔗 Analysis chain
Confirm the proper handling of time fields in account mapping.
In the mapping function (lines 117-127), the code accesses the .Time field from from.FirstUsage, from.InsertionDate, and from.UpdatedAt. If these fields are already of type time.Time, accessing .Time may cause a compilation error. Please verify that these fields are of a type that contains a Time field (e.g., pgtype.Timestamp or a custom type with a Time field).
Apply this diff if the fields are of type time.Time:
- FirstUsage: from.FirstUsage.Time,
- InsertionDate: from.InsertionDate.Time,
- UpdatedAt: from.UpdatedAt.Time,
+ FirstUsage: from.FirstUsage,
+ InsertionDate: from.InsertionDate,
+ UpdatedAt: from.UpdatedAt,
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
#!/bin/bash
# Search for the Account struct definition in the ledger package
ast-grep --pattern 'type Account struct {
$$$
}'
# Backup search using ripgrep in case the struct is defined differently
rg "type Account struct" -A 15
Length of output: 7060
var postCommitVolumes ledger.PostCommitVolumes
if tx.PostCommitVolumes != nil {
	postCommitVolumes = tx.PostCommitVolumes.Copy()
}
dstMove := &ledger.Move{
	Account:       posting.Destination,
	Amount:        (*bunpaginate.BigInt)(posting.Amount),
	Asset:         posting.Asset,
	InsertionDate: tx.InsertedAt,
	EffectiveDate: tx.Timestamp,
	TransactionID: tx.ID,
}
if postCommitVolumes != nil {
	dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
	postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
}
moves = append(moves, dstMove)
🛠️ Refactor suggestion
Prevent potential panic due to missing keys when accessing postCommitVolumes.
In the loop (lines 123-139), the code accesses postCommitVolumes[posting.Destination][posting.Asset] without checking if the keys exist. If a key is missing, this will cause a runtime panic. Please add checks to ensure that the nested map entries exist before accessing them, or initialize them if necessary.
Apply this diff to safely access postCommitVolumes:
if tx.PostCommitVolumes != nil {
- dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
- postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+ if assets, ok := postCommitVolumes[posting.Destination]; ok {
+ if volume, ok := assets[posting.Asset]; ok {
+ dstMove.PostCommitVolumes = pointer.For(volume.Copy())
+ }
+ }
+ postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
}
Similarly, update the source move to prevent panics:
if tx.PostCommitVolumes != nil {
- srcMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Source][posting.Asset].Copy())
- postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+ if assets, ok := postCommitVolumes[posting.Source]; ok {
+ if volume, ok := assets[posting.Asset]; ok {
+ srcMove.PostCommitVolumes = pointer.For(volume.Copy())
+ }
+ }
+ postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
}
ctx := bundebug.WithDebug(r.Context())

balances, err := common.LedgerFromContext(r.Context()).GetAggregatedBalances(ctx, *rq)
Use debug context consistently.
The debug context is created but not used in LedgerFromContext. This inconsistency could lead to loss of debug information.
Apply this diff to ensure consistent context usage:
ctx := bundebug.WithDebug(r.Context())
-balances, err := common.LedgerFromContext(r.Context()).GetAggregatedBalances(ctx, *rq)
+balances, err := common.LedgerFromContext(ctx).GetAggregatedBalances(ctx, *rq)
// FeatureIndexTransactionAccounts is used to define if we want to compute post commit volumes of each transaction
FeaturePostCommitVolumes = "POST_COMMIT_VOLUMES"
Fix duplicate feature comment.
The comment for FeaturePostCommitVolumes incorrectly reuses the comment text of FeatureIndexTransactionAccounts.
- // FeatureIndexTransactionAccounts is used to define if we want to compute post commit volumes of each transaction
+ // FeaturePostCommitVolumes is used to define if we want to compute post commit volumes of each transaction
FeaturePostCommitVolumes = "POST_COMMIT_VOLUMES"
-- todo: put in a dedicated migration to avoid lock the table for too long
create index "accounts_volumes_ledger_accounts_address_asset_idx"
    on "{{.Schema}}"."accounts_volumes" (ledger, accounts_address, asset) include (input, output);
💡 Codebase verification
Migration needs restructuring to ensure zero-downtime deployment
The review concerns are valid. Current issues:
- Migration drops PK and creates index without CONCURRENTLY, risking table locks
- Similar index already exists from migration 25, potentially causing redundancy
- Missing rollback strategy in down migration
Recommendations:
- Split into separate migrations:
- First migration: Drop the primary key
- Second migration: Create the new index CONCURRENTLY
- Add corresponding DOWN migrations for rollback
- Verify if the existing index from migration 25 should be dropped
🔗 Analysis chain
Consider splitting the migration to minimize downtime.
The TODO comment suggests potential locking issues. Consider:
- Creating the index concurrently to minimize table locks
- Moving the index creation to a separate migration
- Adding a rollback strategy in the down migration
Additionally, verify if the included columns (input, output) are frequently accessed together in queries to justify the increased index size.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for queries using input/output columns
rg -A 5 'accounts_volumes.*input|accounts_volumes.*output'
Length of output: 1185
Script:
#!/bin/bash
# Check if migration 28 drops the old index
cat internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
Length of output: 410
// todo: adapt
//require.Equalf(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD": {
//			Input:  big.NewInt(0),
//			Output: big.NewInt(int64((i + 1) * 100)),
//		},
//	},
//	"bank": {
//		"USD": {
//			Input:  big.NewInt(int64((i + 1) * 100)),
//			Output: big.NewInt(0),
//		},
//	},
//}, txs[i].PostCommitVolumes, "checking tx %d", i)
Adapt the commented test assertions.
The commented assertions for post-commit volumes need to be updated. This could lead to:
- Missing validation of critical functionality
- Potential bugs in volume calculations going undetected
Please update the test assertions to match the new volume calculation logic or add a tracking issue if this is a known work in progress.
err := store.UpdateVolumes(ctx, ledger.AccountsVolumes{
	Account: "world",
	Asset:   "USD/2",
	Input:   big.NewInt(0),
	Output:  big.NewInt(100),
})
require.NoError(t, err)
require.Equal(t, ledger.PostCommitVolumes{
	"world": {
		"USD/2": ledger.NewVolumesInt64(0, 100),
	},
}, volumes)

volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
	Account: "world",
	Asset:   "USD/2",
	Input:   big.NewInt(50),
	Output:  big.NewInt(0),
})
require.NoError(t, err)
require.Equal(t, ledger.PostCommitVolumes{
	"world": {
		"USD/2": ledger.NewVolumesInt64(50, 100),
	},
}, volumes)

volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
	Account: "world",
	Asset:   "USD/2",
	Input:   big.NewInt(50),
	Output:  big.NewInt(50),
})
require.NoError(t, err)
require.Equal(t, ledger.PostCommitVolumes{
	"world": {
		"USD/2": ledger.NewVolumesInt64(100, 150),
	},
}, volumes)
// todo
//require.Equal(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD/2": ledger.NewVolumesInt64(0, 100),
//	},
//}, volumes)
Restore volume validation in tests.
The test is missing validation for the returned volumes after the update operation.
Consider either:
- Updating the assertions to match the new volume calculation logic
- Adding new assertions that validate the current behavior
- Adding a comment explaining why the validation is temporarily disabled
//volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
//	Account: "world",
//	Asset:   "USD/2",
//	Input:   big.NewInt(50),
//	Output:  big.NewInt(0),
//})
//require.NoError(t, err)
//require.Equal(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD/2": ledger.NewVolumesInt64(50, 100),
//	},
//}, volumes)

//volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
//	Account: "world",
//	Asset:   "USD/2",
//	Input:   big.NewInt(50),
//	Output:  big.NewInt(50),
//})
//require.NoError(t, err)
//require.Equal(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD/2": ledger.NewVolumesInt64(100, 150),
//	},
//}, volumes)
Update or remove commented test cases.
Multiple test cases for volume updates are commented out. This reduces test coverage for:
- Volume input/output calculations
- Sequential volume updates
- Edge cases in volume handling
Please either:
- Update these test cases to match the new implementation
- Add new test cases that cover the same scenarios
- Remove the commented code if it's no longer relevant
Force-pushed 8df9f20 to 6fab2f8 (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (1)
internal/storage/ledger/accounts.go (1)
121-123
: ⚠️ Potential issue: Remove .Time access from time fields.
This is a duplicate of a previous review comment. The time fields are already of type time.Time; accessing .Time will cause compilation errors.
Apply this diff:

- FirstUsage:    from.FirstUsage.Time,
- InsertionDate: from.InsertionDate.Time,
- UpdatedAt:     from.UpdatedAt.Time,
+ FirstUsage:    from.FirstUsage,
+ InsertionDate: from.InsertionDate,
+ UpdatedAt:     from.UpdatedAt,
🧹 Nitpick comments (3)
internal/storage/ledger/accounts.go (1)
129-134
: Consider using prepared statements for better performance.
The current implementation builds conditions and arguments in a loop. Using prepared statements would be more efficient for multiple accounts.
internal/storage/ledger/resource_volumes.go (1)
83-87
: Consider adding an ORDER BY clause for deterministic array_agg results.
When using array_agg with [1] to get the first element, the results might be non-deterministic without an ORDER BY clause.
Apply this diff:

- selectVolumes = selectVolumes.ColumnExpr("(array_agg(account_array))[1] as account_array")
+ selectVolumes = selectVolumes.ColumnExpr("(array_agg(account_array ORDER BY address))[1] as account_array")
- selectVolumes = selectVolumes.ColumnExpr("(array_agg(metadata))[1] as metadata")
+ selectVolumes = selectVolumes.ColumnExpr("(array_agg(metadata ORDER BY address))[1] as metadata")

internal/storage/ledger/volumes.go (1)
12-17
: Fix inconsistent metric name in tracing.The metric name "UpdateBalances" doesn't match the method name "UpdateVolumes", which could lead to confusion in monitoring and debugging.
- "UpdateBalances", + "UpdateVolumes",
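The array_agg determinism nitpick above generalizes beyond SQL: "the first element of a group" is only well defined once the group is explicitly ordered. A small self-contained Go sketch of the same idea (the `firstPerKey` helper and `row` type are hypothetical, not code from this PR):

```go
package main

import (
	"fmt"
	"sort"
)

type row struct {
	Address string
	Payload string
}

// firstPerKey returns, for each address, the payload of the first row
// after an explicit sort. Without the sort, "first" would depend on
// input order, just like array_agg without ORDER BY.
func firstPerKey(rows []row) map[string]string {
	sorted := append([]row(nil), rows...)
	sort.Slice(sorted, func(i, j int) bool {
		if sorted[i].Address != sorted[j].Address {
			return sorted[i].Address < sorted[j].Address
		}
		return sorted[i].Payload < sorted[j].Payload
	})
	out := map[string]string{}
	for _, r := range sorted {
		if _, ok := out[r.Address]; !ok {
			out[r.Address] = r.Payload
		}
	}
	return out
}

func main() {
	fmt.Println(firstPerKey([]row{{"a", "z"}, {"a", "b"}, {"b", "c"}})) // map[a:b b:c]
}
```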
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (6)
.github/workflows/benchmark-comparison.yml
is excluded by!**/*.yml
go.mod
is excluded by!**/*.mod
go.sum
is excluded by!**/*.sum
,!**/*.sum
internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml
is excluded by!**/*.yaml
tools/generator/go.mod
is excluded by!**/*.mod
tools/generator/go.sum
is excluded by!**/*.sum
,!**/*.sum
📒 Files selected for processing (23)
internal/README.md
(99 hunks)internal/api/v2/controllers_balances.go
(2 hunks)internal/controller/ledger/controller_default.go
(2 hunks)internal/ledger_test.go
(1 hunks)internal/posting.go
(2 hunks)internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
(1 hunks)internal/storage/ledger/accounts.go
(2 hunks)internal/storage/ledger/accounts_test.go
(9 hunks)internal/storage/ledger/balances.go
(3 hunks)internal/storage/ledger/balances_test.go
(2 hunks)internal/storage/ledger/debug.go
(1 hunks)internal/storage/ledger/main_test.go
(1 hunks)internal/storage/ledger/moves_test.go
(2 hunks)internal/storage/ledger/resource_accounts.go
(2 hunks)internal/storage/ledger/resource_volumes.go
(2 hunks)internal/storage/ledger/transactions.go
(3 hunks)internal/storage/ledger/transactions_test.go
(1 hunks)internal/storage/ledger/volumes.go
(2 hunks)internal/storage/ledger/volumes_test.go
(2 hunks)pkg/features/features.go
(4 hunks)test/e2e/api_accounts_metadata_test.go
(1 hunks)test/e2e/app_lifecycle_test.go
(1 hunks)test/stress/stress_test.go
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
- test/e2e/api_accounts_metadata_test.go
- internal/ledger_test.go
- internal/api/v2/controllers_balances.go
- internal/storage/ledger/debug.go
- internal/storage/ledger/transactions_test.go
- internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
- pkg/features/features.go
- internal/README.md
- test/e2e/app_lifecycle_test.go
- internal/storage/ledger/main_test.go
- internal/storage/ledger/volumes_test.go
- test/stress/stress_test.go
- internal/posting.go
- internal/controller/ledger/controller_default.go
- internal/storage/ledger/balances.go
- internal/storage/ledger/balances_test.go
- internal/storage/ledger/transactions.go
- internal/storage/ledger/accounts_test.go
- internal/storage/ledger/resource_accounts.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: BenchmarkCompare
🔇 Additional comments (4)
internal/storage/ledger/moves_test.go (1)
32-44
: LGTM! Well-structured test initialization.The account initialization with timestamps and metadata is properly implemented, ensuring consistent state for testing.
internal/storage/ledger/accounts.go (1)
136-162
: LGTM! Well-structured CTE query.The query construction using CTEs is well-organized and handles both updates and inserts efficiently.
internal/storage/ledger/resource_volumes.go (1)
64-71
: LGTM! Efficient volume aggregation.The use of sum aggregations with proper grouping improves query performance by reducing the number of rows processed.
internal/storage/ledger/volumes.go (1)
11-11
: Verify the impact of removing PostCommitVolumes from return type.This is a breaking change that might affect API consumers who rely on the returned volumes data.
Let's check for potential impacts:
✅ Verification successful
The removal of PostCommitVolumes return type is safe and intentional
This change is part of a planned migration controlled by the POST_COMMIT_VOLUMES feature flag. No code is directly consuming the returned volumes from UpdateVolumes, and volume tracking functionality remains intact in the system.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for direct usage of UpdateVolumes return values
ast-grep --pattern 'volumes, err := $_.UpdateVolumes($$$)'
# Search for any references to PostCommitVolumes type
rg "PostCommitVolumes"
```

Length of output: 11998
internal/storage/ledger/volumes.go
Outdated
return postgres.ResolveError(err)
}

ret := ledger.PostCommitVolumes{}
for _, volumes := range accountVolumes {
	if _, ok := ret[volumes.Account]; !ok {
		ret[volumes.Account] = map[string]ledger.Volumes{}
	}
	ret[volumes.Account][volumes.Asset] = ledger.Volumes{
		Input: volumes.Input,
		Output: volumes.Output,
	}
}

return ret, err
return err
},
)
)))
Fix redundant error return.
The error is resolved on line 36 but then the unresolved error is returned on line 39, which could lose the resolution context.
```diff
 if err != nil {
-	return postgres.ResolveError(err)
+	err = postgres.ResolveError(err)
 }
-return err
+return nil
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-return postgres.ResolveError(err)
-}
-ret := ledger.PostCommitVolumes{}
-for _, volumes := range accountVolumes {
-	if _, ok := ret[volumes.Account]; !ok {
-		ret[volumes.Account] = map[string]ledger.Volumes{}
-	}
-	ret[volumes.Account][volumes.Asset] = ledger.Volumes{
-		Input:  volumes.Input,
-		Output: volumes.Output,
-	}
-}
-return ret, err
-return err
-},
-)
-)))
+if err != nil {
+	err = postgres.ResolveError(err)
+}
+return nil
+},
+)))
```
))) |
6fab2f8 to a59a280 (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (4)
internal/storage/ledger/volumes.go (1)
36-39
:⚠️ Potential issueFix error handling inconsistency.
The error is resolved on line 36 but then the unresolved error is returned on line 39, which could lose the resolution context.
Apply this diff to fix the error handling:
```diff
 if err != nil {
-	return postgres.ResolveError(err)
+	err = postgres.ResolveError(err)
 }
-return err
+return nil
```

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by testsinternal/storage/ledger/accounts.go (1)
121-123
:⚠️ Potential issueRemove
.Time
access from time fields.The time fields in the source
ledger.Account
struct are already of typetime.Time
.Apply this diff:
```diff
-	FirstUsage:    from.FirstUsage.Time,
-	InsertionDate: from.InsertionDate.Time,
-	UpdatedAt:     from.UpdatedAt.Time,
+	FirstUsage:    from.FirstUsage,
+	InsertionDate: from.InsertionDate,
+	UpdatedAt:     from.UpdatedAt,
```

internal/storage/ledger/transactions.go (2)
123-139
:⚠️ Potential issuePrevent potential panic when accessing postCommitVolumes.
The code still has potential panic issues when accessing nested maps without checking if the keys exist.
Apply this diff to safely access `postCommitVolumes`:

```diff
-if postCommitVolumes != nil {
-	dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
-	postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+if postCommitVolumes != nil {
+	if assets, ok := postCommitVolumes[posting.Destination]; ok {
+		if volume, ok := assets[posting.Asset]; ok {
+			dstMove.PostCommitVolumes = pointer.For(volume.Copy())
+		}
+	}
+	postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+}
```
141-154
:⚠️ Potential issuePrevent potential panic when accessing postCommitVolumes for source moves.
Similar to the destination moves, the source moves code has potential panic issues when accessing nested maps without checking if the keys exist.
Apply this diff to safely access `postCommitVolumes`:

```diff
-if postCommitVolumes != nil {
-	srcMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Source][posting.Asset].Copy())
-	postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+if postCommitVolumes != nil {
+	if assets, ok := postCommitVolumes[posting.Source]; ok {
+		if volume, ok := assets[posting.Asset]; ok {
+			srcMove.PostCommitVolumes = pointer.For(volume.Copy())
+		}
+	}
+	postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+}
```
🧹 Nitpick comments (4)
internal/storage/ledger/resource_accounts.go (1)
173-174
: Consider database portability for volume calculations.The composite type cast
::volumes
is PostgreSQL-specific. Consider adding a comment explaining this dependency or abstracting it for better database portability.internal/storage/ledger/transactions.go (1)
55-96
: Consider using prepared statements for better security and performance.The SQL query construction is well-structured but could benefit from using prepared statements to prevent SQL injection and improve query plan caching.
Apply this diff to use prepared statements:
```diff
-selectVolumes := store.db.NewSelect().
+selectVolumes := store.db.NewRaw(`
+	SELECT asset, sum(input) as input, sum(output) as output, accounts_address
+	FROM ? as volumes
+	WHERE (?) OR (?)
+	GROUP BY accounts_address, asset
+	ORDER BY accounts_address, asset
+`, store.GetPrefixedRelationName("accounts_volumes"), conditions, args...)
```

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 92-93: internal/storage/ledger/transactions.go#L92-L93
Added lines #L92 - L93 were not covered by testsinternal/storage/ledger/accounts_test.go (2)
154-159
: Remove or guard debug statements.Debug statements should not be committed to the test code. Consider either:
- Removing these debug statements, or
- Guarding them behind a debug flag/environment variable.
```diff
-store.DumpTables(ctx, "transactions")
-store.DumpQuery(ctx, store.GetDB().NewSelect().
-	TableExpr(store.GetPrefixedRelationName("moves")).
-	Where("ledger = ?", store.GetLedger().Name).
-	Where("accounts_address = 'account:1'").
-	Order("seq desc"))
+if os.Getenv("DEBUG") == "true" {
+	store.DumpTables(ctx, "transactions")
+	store.DumpQuery(ctx, store.GetDB().NewSelect().
+		TableExpr(store.GetPrefixedRelationName("moves")).
+		Where("ledger = ?", store.GetLedger().Name).
+		Where("accounts_address = 'account:1'").
+		Order("seq desc"))
+}
```
281-284
: Remove or guard debug statements.Similar to the previous debug statements, these should not be committed to the test code.
```diff
-if len(ret.Data) != 2 {
-	fmt.Println("#############")
-	store.DumpTables(ctx, "transactions", "accounts")
-}
+if os.Getenv("DEBUG") == "true" && len(ret.Data) != 2 {
+	store.DumpTables(ctx, "transactions", "accounts")
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (6)
.github/workflows/benchmark-comparison.yml
is excluded by!**/*.yml
go.mod
is excluded by!**/*.mod
go.sum
is excluded by!**/*.sum
,!**/*.sum
internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml
is excluded by!**/*.yaml
tools/generator/go.mod
is excluded by!**/*.mod
tools/generator/go.sum
is excluded by!**/*.sum
,!**/*.sum
📒 Files selected for processing (23)
internal/README.md
(99 hunks)internal/api/v2/controllers_balances.go
(2 hunks)internal/controller/ledger/controller_default.go
(2 hunks)internal/ledger_test.go
(1 hunks)internal/posting.go
(2 hunks)internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
(1 hunks)internal/storage/ledger/accounts.go
(2 hunks)internal/storage/ledger/accounts_test.go
(9 hunks)internal/storage/ledger/balances.go
(3 hunks)internal/storage/ledger/balances_test.go
(2 hunks)internal/storage/ledger/debug.go
(1 hunks)internal/storage/ledger/main_test.go
(1 hunks)internal/storage/ledger/moves_test.go
(2 hunks)internal/storage/ledger/resource_accounts.go
(2 hunks)internal/storage/ledger/resource_volumes.go
(2 hunks)internal/storage/ledger/transactions.go
(3 hunks)internal/storage/ledger/transactions_test.go
(1 hunks)internal/storage/ledger/volumes.go
(2 hunks)internal/storage/ledger/volumes_test.go
(2 hunks)pkg/features/features.go
(4 hunks)test/e2e/api_accounts_metadata_test.go
(1 hunks)test/e2e/app_lifecycle_test.go
(1 hunks)test/stress/stress_test.go
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (15)
- internal/ledger_test.go
- internal/storage/ledger/debug.go
- internal/api/v2/controllers_balances.go
- internal/storage/ledger/main_test.go
- test/e2e/api_accounts_metadata_test.go
- internal/storage/ledger/transactions_test.go
- internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
- pkg/features/features.go
- internal/README.md
- test/stress/stress_test.go
- internal/posting.go
- internal/storage/ledger/volumes_test.go
- test/e2e/app_lifecycle_test.go
- internal/storage/ledger/balances_test.go
- internal/storage/ledger/resource_volumes.go
🧰 Additional context used
🪛 GitHub Check: codecov/patch
internal/storage/ledger/balances.go
[warning] 76-77: internal/storage/ledger/balances.go#L76-L77
Added lines #L76 - L77 were not covered by tests
internal/storage/ledger/volumes.go
[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by tests
internal/storage/ledger/transactions.go
[warning] 92-93: internal/storage/ledger/transactions.go#L92-L93
Added lines #L92 - L93 were not covered by tests
internal/storage/ledger/accounts.go
[warning] 174-174: internal/storage/ledger/accounts.go#L174
Added line #L174 was not covered by tests
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: BenchmarkCompare
🔇 Additional comments (9)
internal/storage/ledger/balances.go (1)
28-29
: Great comment about lock ordering!The comment explaining the importance of consistent lock ordering for deadlock prevention is very helpful.
internal/storage/ledger/moves_test.go (1)
32-44
: Well-structured test with comprehensive coverage!The test effectively:
- Uses clear timestamps for move ordering
- Verifies post-commit volumes after each move
- Includes metadata handling
- Tests high concurrency scenarios for deadlock detection
internal/storage/ledger/accounts.go (1)
136-162
: Well-structured CTE query for account upserts.The query effectively:
- Creates a temporary table for accounts to upsert
- Identifies existing accounts needing updates
- Identifies non-existing accounts
- Combines results using UNION ALL
internal/storage/ledger/transactions.go (1)
40-49
: LGTM! Good error handling and feature flag check.The code correctly checks the feature flag and handles errors from the volume locking mechanism.
internal/storage/ledger/accounts_test.go (4)
333-336
: LGTM! Improved timestamp handling in tests.The changes properly initialize and verify the timestamp fields (FirstUsage, InsertionDate, UpdatedAt) in the test assertions.
Also applies to: 351-353
477-484
: LGTM! Proper timestamp initialization.The changes correctly initialize all timestamp fields (FirstUsage, InsertionDate, UpdatedAt) at the beginning of the test to ensure consistency.
Also applies to: 488-492
508-515
: LGTM! Good test coverage for future timestamp handling.The test properly verifies that account updates with future timestamps don't modify existing records, maintaining data integrity.
Also applies to: 519-527
529-550
: LGTM! Good test coverage for past timestamp handling.The test properly verifies that account updates with past timestamps correctly modify FirstUsage while maintaining the original InsertionDate and updating UpdatedAt.
internal/controller/ledger/controller_default.go (1)
454-465
: LGTM! Improved metadata and timestamp handling.The changes:
- Prevent nil pointer dereference by initializing empty metadata
- Ensure consistent timestamp handling by properly setting FirstUsage, InsertionDate, and UpdatedAt fields
if err := store.lockVolumes(ctx, query); err != nil {
	return nil, postgres.ResolveError(err)
}
💡 Codebase verification
Error handling test coverage needed for lockVolumes function
The review comment is valid. Current tests only cover successful operations, with no coverage for error scenarios in the lockVolumes function. Given that this function is used in critical paths (transactions and balances), error handling test coverage should be added.
Suggested test locations:
internal/storage/ledger/balances_test.go
internal/storage/ledger/transactions_test.go
🔗 Analysis chain
Add test coverage for error handling.
The error handling path in lockVolumes is not covered by tests.
Run this script to verify test coverage:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check test coverage for lockVolumes error handling
rg -A 5 "TestLockVolumes|lockVolumes.*error" test_*.go
Length of output: 108
Script:
#!/bin/bash
# First, find all test files in the ledger directory
echo "=== Test files in ledger directory ==="
fd -t f "test.*\.go$" internal/storage/ledger/
# Then search for lockVolumes related tests and error handling
echo -e "\n=== LockVolumes test coverage ==="
rg -A 5 "TestLockVolumes|lockVolumes.*error" internal/storage/ledger/
Length of output: 1093
Script:
#!/bin/bash
# Find where lockVolumes is called
echo "=== lockVolumes usage ==="
rg -p "lockVolumes" internal/storage/ledger/
# Search for related balance operation tests
echo -e "\n=== Balance operation tests ==="
rg -p "func Test.*Balance" internal/storage/ledger/balances_test.go
Length of output: 652
Script:
#!/bin/bash
# Check the content of balance-related tests
echo "=== Balance test implementations ==="
rg -A 20 "^func TestBalances" internal/storage/ledger/balances_test.go
# Check for any error handling in tests
echo -e "\n=== Error handling in tests ==="
rg "Error|err.*!= nil" internal/storage/ledger/balances_test.go
Length of output: 2203
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 76-77: internal/storage/ledger/balances.go#L76-L77
Added lines #L76 - L77 were not covered by tests
if err != nil {
	return fmt.Errorf("failed to aggregate post commit volumes: %w", err)
}
💡 Codebase verification
Add test coverage for error handling in finalAggregate.Scan
The codebase has good test coverage for successful scenarios, but lacks specific test cases for error handling of the finalAggregate.Scan
operation. Consider adding test cases for:
- Database scan errors
- Malformed data scenarios
- Context cancellation
🔗 Analysis chain
Add test coverage for error handling.
The error handling for the finalAggregate.Scan
operation lacks test coverage.
Run the following script to verify test coverage:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check test coverage for error handling.
# Test: Search for test cases covering the error handling.
rg -A 5 "failed to aggregate post commit volumes" internal/storage/ledger/*_test.go
Length of output: 330
Script:
#!/bin/bash
# Get the implementation context
ast-grep --pattern 'finalAggregate.Scan($$$)' internal/storage/ledger/transactions.go
Length of output: 87
Script:
#!/bin/bash
# Get more context around the error message and related test patterns
rg -B 5 -A 5 "failed to aggregate post commit volumes" internal/storage/ledger/
rg "Scan\(" internal/storage/ledger/transactions_test.go
Length of output: 917
Script:
#!/bin/bash
# Look for test cases that might be testing this functionality
rg -B 3 -A 3 "PostCommitVolumes" internal/storage/ledger/transactions_test.go
rg -B 3 -A 3 "finalAggregate" internal/storage/ledger/transactions_test.go
Length of output: 2864
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 92-93: internal/storage/ledger/transactions.go#L92-L93
Added lines #L92 - L93 were not covered by tests
a59a280 to 46a0e18 (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (3)
internal/storage/ledger/volumes.go (1)
36-39
:⚠️ Potential issueFix redundant error return.
The error is resolved on line 36 but then the unresolved error is returned on line 39, which could lose the resolution context.
Apply this diff to fix the error handling:
```diff
 if err != nil {
-	return postgres.ResolveError(err)
+	err = postgres.ResolveError(err)
 }
-return err
+return nil
```

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by testsinternal/storage/ledger/transactions.go (2)
123-139
:⚠️ Potential issueFix potential panic when accessing post-commit volumes.
The code accesses nested maps without checking if the keys exist, which could cause a runtime panic.
Apply this diff to safely access post-commit volumes:
```diff
-if postCommitVolumes != nil {
-	dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
-	postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+if postCommitVolumes != nil {
+	if assets, ok := postCommitVolumes[posting.Destination]; ok {
+		if volume, ok := assets[posting.Asset]; ok {
+			dstMove.PostCommitVolumes = pointer.For(volume.Copy())
+		}
+	}
+	postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+}
```
141-154
:⚠️ Potential issueFix potential panic when accessing post-commit volumes.
The code accesses nested maps without checking if the keys exist, which could cause a runtime panic.
Apply this diff to safely access post-commit volumes:
```diff
-if postCommitVolumes != nil {
-	srcMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Source][posting.Asset].Copy())
-	postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+if postCommitVolumes != nil {
+	if assets, ok := postCommitVolumes[posting.Source]; ok {
+		if volume, ok := assets[posting.Asset]; ok {
+			srcMove.PostCommitVolumes = pointer.For(volume.Copy())
+		}
+	}
+	postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+}
```
🧹 Nitpick comments (1)
internal/storage/ledger/transactions.go (1)
55-96
: Consider using query builder methods for SQL conditions.The current implementation uses string concatenation for SQL conditions, which could be error-prone and harder to maintain. Consider using the query builder's methods for constructing conditions.
Apply this diff to improve the query construction:
```diff
-conditions := make([]string, 0)
-args := make([]any, 0)
-for account, assets := range involvedVolumes {
-	for _, asset := range assets {
-		conditions = append(conditions, "ledger = ? and accounts_address = ? and asset = ?")
-		args = append(args, store.ledger.Name, account, asset)
-	}
-}
-
-selectVolumes := store.db.NewSelect().
-	// ...
-	Where("("+strings.Join(conditions, ") OR (")+")", args...)
+selectVolumes := store.db.NewSelect()
+for account, assets := range involvedVolumes {
+	for _, asset := range assets {
+		selectVolumes = selectVolumes.WhereGroup(" OR ", func(q *bun.SelectQuery) *bun.SelectQuery {
+			return q.Where("ledger = ? AND accounts_address = ? AND asset = ?",
+				store.ledger.Name, account, asset)
+		})
+	}
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (6)
.github/workflows/benchmark-comparison.yml
is excluded by!**/*.yml
go.mod
is excluded by!**/*.mod
go.sum
is excluded by!**/*.sum
,!**/*.sum
internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml
is excluded by!**/*.yaml
tools/generator/go.mod
is excluded by!**/*.mod
tools/generator/go.sum
is excluded by!**/*.sum
,!**/*.sum
📒 Files selected for processing (23)
internal/README.md
(99 hunks)internal/api/v2/controllers_balances.go
(2 hunks)internal/controller/ledger/controller_default.go
(2 hunks)internal/ledger_test.go
(1 hunks)internal/posting.go
(2 hunks)internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
(1 hunks)internal/storage/ledger/accounts.go
(2 hunks)internal/storage/ledger/accounts_test.go
(9 hunks)internal/storage/ledger/balances.go
(3 hunks)internal/storage/ledger/balances_test.go
(2 hunks)internal/storage/ledger/debug.go
(1 hunks)internal/storage/ledger/main_test.go
(1 hunks)internal/storage/ledger/moves_test.go
(2 hunks)internal/storage/ledger/resource_accounts.go
(2 hunks)internal/storage/ledger/resource_volumes.go
(2 hunks)internal/storage/ledger/transactions.go
(3 hunks)internal/storage/ledger/transactions_test.go
(1 hunks)internal/storage/ledger/volumes.go
(2 hunks)internal/storage/ledger/volumes_test.go
(2 hunks)pkg/features/features.go
(4 hunks)test/e2e/api_accounts_metadata_test.go
(1 hunks)test/e2e/app_lifecycle_test.go
(1 hunks)test/stress/stress_test.go
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (15)
- test/e2e/api_accounts_metadata_test.go
- internal/ledger_test.go
- internal/api/v2/controllers_balances.go
- internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
- internal/README.md
- pkg/features/features.go
- internal/storage/ledger/transactions_test.go
- internal/storage/ledger/debug.go
- internal/storage/ledger/volumes_test.go
- test/stress/stress_test.go
- internal/controller/ledger/controller_default.go
- internal/posting.go
- internal/storage/ledger/main_test.go
- test/e2e/app_lifecycle_test.go
- internal/storage/ledger/balances_test.go
🧰 Additional context used
🪛 GitHub Check: codecov/patch
internal/storage/ledger/volumes.go
[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by tests
internal/storage/ledger/accounts.go
[warning] 174-174: internal/storage/ledger/accounts.go#L174
Added line #L174 was not covered by tests
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: BenchmarkCompare
🔇 Additional comments (16)
internal/storage/ledger/balances.go (2)
19-37
: LGTM! Well-designed locking mechanism.The
lockVolumes
function implementation is robust:
- Generates consistent lock keys to prevent deadlocks
- Uses sorted keys to ensure a consistent locking order
- Properly handles errors
75-77
: LGTM! Good integration of volume locking.The
GetBalances
function now properly locks volumes before querying to prevent race conditions.internal/storage/ledger/moves_test.go (1)
32-44
: LGTM! Proper initialization of Account fields.The Account initialization correctly includes all required fields with appropriate timestamps and metadata.
internal/storage/ledger/accounts.go (3)
105-115
: LGTM! Well-structured Account definition.The Account struct is well-defined with appropriate field tags and types.
129-170
: LGTM! Well-implemented upsert logic.The CTE-based upsert implementation is robust:
- Properly handles existing and new accounts
- Correctly merges metadata
- Maintains first_usage integrity
121-123
:⚠️ Potential issueRemove
.Time
access from time fields.The time fields in the source
ledger.Account
struct are already of typetime.Time
. Accessing.Time
on these fields will cause compilation errors.Apply this diff:
- FirstUsage: from.FirstUsage.Time, - InsertionDate: from.InsertionDate.Time, - UpdatedAt: from.UpdatedAt.Time, + FirstUsage: from.FirstUsage, + InsertionDate: from.InsertionDate, + UpdatedAt: from.UpdatedAt,Likely invalid or redundant comment.
internal/storage/ledger/resource_accounts.go (2)
110-111
: LGTM! Improved balance calculation.The balance calculation now correctly uses aggregate functions with proper grouping.
173-174
: LGTM! Improved volumes calculation.The volumes calculation now correctly uses aggregate functions with proper grouping.
internal/storage/ledger/resource_volumes.go (3)
64-67
: LGTM! Performance improvement using aggregation.The changes improve query performance by using aggregate functions and grouping, reducing the number of rows returned from the database.
Also applies to: 71-71
83-83
: LGTM! Performance improvement using array aggregation.The changes improve query performance by using array_agg to aggregate account arrays and selecting only the first element.
87-87
: LGTM! Performance improvement using array aggregation.The changes improve query performance by using array_agg to aggregate metadata and selecting only the first element.
internal/storage/ledger/transactions.go (1)
40-49
: LGTM! Added debugging context and volume locking.The changes improve debugging capabilities and prevent race conditions by adding debug context and volume locking for post-commit volumes.
internal/storage/ledger/accounts_test.go (4)
154-160
: LGTM! Added debug statements for test failures.The changes improve test debugging capabilities by adding DumpTables and DumpQuery calls.
281-284
: LGTM! Added debug statements for test failures.The changes improve test debugging capabilities by adding DumpTables call when the test fails.
477-484
: LGTM! Improved test data initialization.The changes improve test consistency by initializing timestamps at the beginning and using the same timestamp for all fields.
508-550
: LGTM! Added comprehensive test cases for account upsert.The changes improve test coverage by adding test cases for:
- Upsert with no modification
- Upsert with modification
- Verification of account state after each operation
46a0e18 to cf55177 (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (1)
internal/storage/ledger/accounts.go (1)
121-123
:⚠️ Potential issueRemove
.Time
access from time fields in account mapping.The time fields in the source
ledger.Account
struct are already of typetime.Time
. Accessing.Time
on these fields will cause compilation errors.Apply this fix:
```diff
-	FirstUsage:    from.FirstUsage.Time,
-	InsertionDate: from.InsertionDate.Time,
-	UpdatedAt:     from.UpdatedAt.Time,
+	FirstUsage:    from.FirstUsage,
+	InsertionDate: from.InsertionDate,
+	UpdatedAt:     from.UpdatedAt,
```
🧹 Nitpick comments (3)
internal/storage/ledger/moves_test.go (1)
32-32
: Make test deterministic by using fixed timestamps.Using
time.Now()
in tests can make them non-deterministic. Consider using fixed timestamps to ensure consistent test behavior.Example:
```diff
-now := time.Now()
+now := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
```

internal/storage/ledger/balances.go (1)
28-29
: Enhance comment about lock ordering.While the comment about sorting is helpful, it could be more descriptive about why consistent ordering prevents deadlocks.
Suggested comment:
```diff
-// notes(gfyrag): Keep order, it ensures consistent locking order and limit deadlocks
+// Sort lock keys to ensure a consistent global ordering across transactions.
+// This prevents deadlocks by ensuring all transactions acquire locks in the same order,
+// avoiding circular wait conditions where transaction A waits for B while B waits for A.
```

internal/controller/ledger/controller_default.go (1)
454-465: Consider improvements to timestamp and metadata handling.

The changes look good but could be enhanced:

- Consider using a constant empty map for nil metadata to avoid allocations: var emptyMetadata = make(metadata.Metadata)
- Consider accepting custom timestamps through parameters for testing purposes.

Apply this diff to implement the suggestions:

```diff
+var emptyMetadata = make(metadata.Metadata)
+
 func (ctrl *DefaultController) saveAccountMetadata(ctx context.Context, store Store, parameters Parameters[SaveAccountMetadata]) (*ledger.SavedMetadata, error) {
 	metadata := parameters.Input.Metadata
 	if metadata == nil {
-		metadata = make(map[string]string)
+		metadata = emptyMetadata
 	}
-	now := time.Now()
+	now := parameters.Input.Timestamp
+	if now.IsZero() {
+		now = time.Now()
+	}
 	if err := store.UpsertAccounts(ctx, &ledger.Account{
 		Address:  parameters.Input.Address,
 		Metadata: metadata,
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (7)

- .github/workflows/benchmark-comparison.yml is excluded by !**/*.yml
- go.mod is excluded by !**/*.mod
- go.sum is excluded by !**/*.sum, !**/*.sum
- internal/storage/bucket/migrations/28-create-accounts-volumes-index/notes.yaml is excluded by !**/*.yaml
- internal/storage/bucket/migrations/29-drop-accounts-volumes-pk/notes.yaml is excluded by !**/*.yaml
- tools/generator/go.mod is excluded by !**/*.mod
- tools/generator/go.sum is excluded by !**/*.sum, !**/*.sum
📒 Files selected for processing (25)

- internal/README.md (99 hunks)
- internal/api/v2/controllers_balances.go (2 hunks)
- internal/controller/ledger/controller_default.go (2 hunks)
- internal/ledger_test.go (1 hunks)
- internal/posting.go (2 hunks)
- internal/storage/bucket/migrations/28-create-accounts-volumes-index/up.sql (1 hunks)
- internal/storage/bucket/migrations/29-drop-accounts-volumes-pk/up.sql (1 hunks)
- internal/storage/ledger/accounts.go (2 hunks)
- internal/storage/ledger/accounts_test.go (7 hunks)
- internal/storage/ledger/balances.go (3 hunks)
- internal/storage/ledger/balances_test.go (2 hunks)
- internal/storage/ledger/debug.go (1 hunks)
- internal/storage/ledger/main_test.go (1 hunks)
- internal/storage/ledger/moves_test.go (2 hunks)
- internal/storage/ledger/resource.go (1 hunks)
- internal/storage/ledger/resource_accounts.go (2 hunks)
- internal/storage/ledger/resource_volumes.go (5 hunks)
- internal/storage/ledger/transactions.go (3 hunks)
- internal/storage/ledger/transactions_test.go (1 hunks)
- internal/storage/ledger/volumes.go (1 hunks)
- internal/storage/ledger/volumes_test.go (2 hunks)
- pkg/features/features.go (4 hunks)
- test/e2e/api_accounts_metadata_test.go (1 hunks)
- test/e2e/app_lifecycle_test.go (1 hunks)
- test/stress/stress_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- internal/storage/ledger/resource.go
- internal/storage/bucket/migrations/29-drop-accounts-volumes-pk/up.sql
🚧 Files skipped from review as they are similar to previous changes (14)
- internal/ledger_test.go
- pkg/features/features.go
- test/e2e/api_accounts_metadata_test.go
- internal/README.md
- internal/posting.go
- internal/storage/ledger/transactions_test.go
- test/stress/stress_test.go
- internal/storage/ledger/balances_test.go
- internal/storage/ledger/debug.go
- internal/api/v2/controllers_balances.go
- internal/storage/ledger/volumes_test.go
- internal/storage/ledger/transactions.go
- internal/storage/ledger/main_test.go
- internal/storage/ledger/resource_volumes.go
👮 Files not reviewed due to content moderation or server errors (2)
- internal/storage/ledger/accounts_test.go
- internal/controller/ledger/controller_default.go
🔇 Additional comments (22)
test/e2e/app_lifecycle_test.go (1)
113-113: LGTM! Improved transaction locking strategy.

The change from checking INSERT operations to advisory locks is a good improvement as it:
- Provides a more reliable way to track transaction processing
- Reduces table-level locking contention
- Aligns with PostgreSQL best practices for concurrency control
Let's verify the advisory lock usage in the codebase:
✅ Verification successful
✅ Advisory lock verification successful
The test change correctly reflects the codebase's transaction processing strategy, which uses PostgreSQL advisory locks for concurrency control. This is consistently implemented across the codebase and well-documented in CONTRIBUTING.md.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify the advisory lock implementation

# Check for advisory lock usage in the codebase
rg -A 5 'pg_advisory.*lock'

# Check for any remaining direct INSERT operations on accounts
rg -A 5 'INSERT INTO.*accounts'
```

Length of output: 3552
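For context on the per-account advisory-lock approach the test now relies on: each account is mapped to a stable 64-bit key, and the transaction takes pg_advisory_xact_lock on that key before writing. The key-derivation scheme below is an assumption for illustration, not the ledger's actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// advisoryLockKey derives a stable signed 64-bit key for a (ledger, address)
// pair, suitable for pg_advisory_xact_lock, which takes a bigint argument.
// FNV-1a is an arbitrary choice; any stable hash works as long as every
// writer derives keys the same way.
func advisoryLockKey(ledger, address string) int64 {
	h := fnv.New64a()
	h.Write([]byte(ledger))
	h.Write([]byte{0}) // separator so ("a", "bc") and ("ab", "c") hash differently
	h.Write([]byte(address))
	return int64(h.Sum64())
}

func main() {
	key := advisoryLockKey("default", "orders:1234")
	// pg_advisory_xact_lock releases the lock automatically at commit/rollback,
	// so no explicit unlock is needed.
	fmt.Printf("select pg_advisory_xact_lock(%d)\n", key)
}
```

Because the lock is scoped to one key rather than the whole table, concurrent transactions touching different accounts do not contend, which is what reduces table-level locking contention.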
internal/storage/ledger/resource_accounts.go (2)
110-111: LGTM! Improved balance calculation using aggregates.

The change to use sum aggregates for input and output calculations provides a more accurate way to compute balances, especially when dealing with multiple transactions per asset.

173-174: LGTM! Consider monitoring query performance.

The implementation correctly uses aggregate functions with proper grouping for volume calculations. The approach aligns well with the append-only strategy for balances.

Since the query groups by both accounts_address and asset, consider monitoring query performance with large datasets to ensure optimal execution. Run the following script to check the query execution plan:
✅ Verification successful
Performance is well-optimized with existing indexes
The codebase already includes appropriate indexes on the accounts_volumes table that cover both the grouping columns (accounts_address, asset) and the aggregated columns (input, output). The query will perform efficiently even with large datasets.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check if there are any indexes on accounts_volumes table that could help with the grouping operation

# Look for CREATE INDEX statements in migration files
rg -A 2 'CREATE.*INDEX.*accounts_volumes' --type sql

# Look for index definitions in schema files
rg -A 2 'CREATE.*INDEX.*accounts_volumes' --type go
```

Length of output: 108

Script:

```shell
#!/bin/bash
# Cast a wider net to find relevant table and index definitions

# Look for table creation statements related to volumes
rg -i "create.*table.*volume" --type sql -A 5

# Search for migration files containing volume-related changes
fd -e sql -e go | xargs rg -l "volume"

# Look for any index definitions in the codebase
rg -i "create.*index.*volume" --type sql
```

Length of output: 4268
internal/storage/bucket/migrations/28-create-accounts-volumes-index/up.sql (1)
1-2: LGTM! Well-designed index creation.

The concurrent index creation with included columns is well structured:

- Using create index concurrently minimizes impact on production traffic
- Column order (ledger, accounts_address, asset) aligns with query patterns
- Including input/output columns optimizes query performance
internal/storage/ledger/volumes.go (1)
40-50
: ⚠️ Potential issue

Fix redundant error return.

The error is resolved on line 45 but then the unresolved error is returned on line 48, which could lose the resolution context. Apply this fix:

```diff
 if err != nil {
-	return postgres.ResolveError(err)
+	err = postgres.ResolveError(err)
 }
-return err
+return nil
```

Likely invalid or redundant comment.
internal/storage/ledger/balances.go (1)
84-86
: Add test coverage for lockVolumes error handling.

The error handling path in lockVolumes is not covered by tests. Given that this function is used in critical paths (transactions and balances), error handling test coverage should be added.
Run this script to verify test coverage:
✅ Verification successful
The review comment is correct - lockVolumes error handling lacks test coverage
The function's error paths are not tested, specifically:
- PostgreSQL advisory lock acquisition failures
- Database connection errors during lock operations
- Transaction rollback behavior when locks fail
This verification confirms that test coverage should be added given the critical nature of this function in transaction processing and balance management.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check test coverage for lockVolumes error handling
rg -A 5 "TestLockVolumes|lockVolumes.*error" test_*.go
```

Length of output: 108

Script:

```shell
#!/bin/bash
# First, find the implementation of lockVolumes
ast-grep --pattern 'func $_ lockVolumes($$$) { $$$ }'

# Then check for test files and coverage
cd internal/storage/ledger
rg -A 5 "TestLockVolumes|lockVolumes.*error" .
```

Length of output: 527

Script:

```shell
#!/bin/bash
# Get the complete implementation of lockVolumes
cd internal/storage/ledger
rg -A 15 "^func.*lockVolumes" .

# Look for indirect test coverage through transaction/balance tests
rg -l "Test.*Balance|Test.*Transaction" .
```

Length of output: 957

Script:

```shell
#!/bin/bash
cd internal/storage/ledger

# Check test files for database error simulation or lock-related tests
rg -A 10 "mock.*DB|fake.*DB|pg_advisory_xact_lock|test.*lock" balances_test.go transactions_test.go

# Look for error handling in tests
rg "Error|err.*nil" balances_test.go transactions_test.go
```

Length of output: 4409
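For context on why lockVolumes is critical: acquiring per-account locks in a globally consistent order is what prevents the circular-wait deadlock discussed earlier. A minimal sketch of that ordering step, under the assumption that lock keys are strings (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"sort"
)

// lockOrder returns the distinct lock keys sorted ascending. If every
// transaction acquires locks in this order, no circular wait can form:
// a transaction holding k1 and waiting for k2 (with k1 < k2) cannot be
// blocked by one holding k2 and waiting for k1, because that transaction
// would have had to acquire k1 first.
func lockOrder(keys []string) []string {
	seen := make(map[string]struct{}, len(keys))
	out := make([]string, 0, len(keys))
	for _, k := range keys {
		if _, ok := seen[k]; !ok {
			seen[k] = struct{}{}
			out = append(out, k)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// Two transactions touching the same accounts in different orders
	// still acquire their locks identically.
	fmt.Println(lockOrder([]string{"world", "orders:1", "fees"}))
	fmt.Println(lockOrder([]string{"fees", "world", "orders:1"}))
	// → both print [fees orders:1 world]
}
```

A deadlock test would exercise the failure path by forcing lock acquisition to error (for example, by cancelling the context mid-acquire) and asserting that the transaction rolls back cleanly.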
internal/storage/ledger/accounts_test.go (12)
153-153: LGTM!

The added empty line improves readability by separating test cases.

224-224: LGTM!

The added empty line improves readability by separating test cases.

322-325: LGTM! Good improvement in test readability.

Storing the timestamp in a variable makes the test more explicit and maintainable.

Line range hint 466-539: LGTM! Comprehensive test coverage for account timestamp handling.

The test cases effectively verify the timestamp behavior during account upsert operations:
- Validates that FirstUsage is set to the earliest timestamp
- Confirms that InsertionDate remains unchanged after initial creation
- Verifies that UpdatedAt reflects the latest modification
internal/controller/ledger/controller_default.go (4)
294-294
: LGTM!

Minor formatting improvement.

454-458: LGTM! Good defensive programming.

The nil check and initialization of the metadata map prevent potential nil pointer dereferences, following Go best practices.

459-465: LGTM! Consistent timestamp handling.

The code captures the current time once and uses it consistently across all timestamp fields (InsertionDate, UpdatedAt, FirstUsage), ensuring temporal consistency in the account metadata.

294-294: LGTM!

The whitespace adjustment improves code formatting.
```diff
 metadata := parameters.Input.Metadata
 if metadata == nil {
 	metadata = make(map[string]string)
 }

 now := time.Now()
 if err := store.UpsertAccounts(ctx, &ledger.Account{
-	Address:  parameters.Input.Address,
-	Metadata: parameters.Input.Metadata,
+	Address:       parameters.Input.Address,
+	Metadata:      metadata,
+	InsertionDate: now,
+	UpdatedAt:     now,
+	FirstUsage:    now,
```
🛠️ Refactor suggestion
Consider retrieving FirstUsage from transaction history.

Setting FirstUsage to the current time during metadata save might not accurately reflect when the account was first used in a transaction. Consider retrieving the actual first usage timestamp from the transaction history.

Apply this diff to improve FirstUsage handling:
```diff
 metadata := parameters.Input.Metadata
 if metadata == nil {
 	metadata = make(map[string]string)
 }
 now := time.Now()
+// Get the earliest transaction timestamp for this account
+firstUsage, err := store.Transactions().GetOne(ctx, ledgercontroller.ResourceQuery[any]{
+	Builder: query.Match("postings.source", parameters.Input.Address).
+		Or(query.Match("postings.destination", parameters.Input.Address)),
+	Order: pointer.For(bunpaginate.Order(bunpaginate.OrderAsc)),
+})
+if err != nil && !errors.Is(err, postgres.ErrNotFound) {
+	return nil, fmt.Errorf("failed to get first usage: %w", err)
+}
+
+firstUsageTime := now
+if firstUsage != nil {
+	firstUsageTime = firstUsage.Timestamp
+}
+
 if err := store.UpsertAccounts(ctx, &ledger.Account{
 	Address:       parameters.Input.Address,
 	Metadata:      metadata,
 	InsertionDate: now,
 	UpdatedAt:     now,
-	FirstUsage:    now,
+	FirstUsage:    firstUsageTime,
 }); err != nil {
```
Committable suggestion skipped: line range outside the PR's diff.
No description provided.