
feat: use append only on balances #655

Open

gfyrag wants to merge 1 commit into main from feat/performance-accounts-send-only

Conversation

gfyrag (Contributor) commented Jan 17, 2025

No description provided.


coderabbitai bot commented Jan 17, 2025

Walkthrough

This pull request introduces several enhancements to the Ledger system, focusing on volume tracking, account metadata, and database operations. Key changes include adding a new method InvolvedVolumes to track asset volumes across accounts, improving account metadata handling with timestamp fields, and refining volume and balance calculation logic. The changes also introduce a new feature flag FeaturePostCommitVolumes and modify various database query and testing strategies to support more robust and flexible ledger operations.

Changes

  • internal/posting.go: Added InvolvedVolumes() method to Postings type to track unique assets for accounts
  • internal/storage/ledger/accounts.go: Introduced new Account struct with extended metadata and timestamp fields
  • internal/storage/ledger/balances.go: Added lockVolumes method and modified balance retrieval logic
  • internal/storage/ledger/volumes.go: Updated UpdateVolumes method to focus on error handling
  • pkg/features/features.go: Added new feature constant FeaturePostCommitVolumes
  • SQL migrations: Added new index on accounts_volumes table and dropped primary key constraint

Suggested Labels

build-images, feature-enhancement, database-migration

Suggested Reviewers

  • flemzord
  • laouji
  • paul-nicolas

Poem

🐰 Volumes dance, accounts take flight,
Metadata timestamps shine so bright!
New features hop across the code,
With volumes tracked and errors slowed.
A ledger's magic, rabbit-style delight! 🌟


codecov bot commented Jan 17, 2025

Codecov Report

Attention: Patch coverage is 84.01254% with 51 lines in your changes missing coverage. Please review.

Project coverage is 81.07%. Comparing base (c10b32b) to head (cf55177).
Report is 1 commit behind head on main.

Files with missing lines:
  • internal/storage/ledger/balances.go: 48.23% patch coverage, 40 missing and 4 partials ⚠️
  • internal/storage/ledger/transactions.go: 96.20% patch coverage, 2 missing and 1 partial ⚠️
  • internal/storage/ledger/volumes.go: 91.89% patch coverage, 2 missing and 1 partial ⚠️
  • internal/storage/ledger/accounts.go: 98.36% patch coverage, 1 missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #655      +/-   ##
==========================================
- Coverage   81.71%   81.07%   -0.64%     
==========================================
  Files         131      131              
  Lines        7059     7323     +264     
==========================================
+ Hits         5768     5937     +169     
- Misses        990     1077      +87     
- Partials      301      309       +8     


gfyrag marked this pull request as ready for review on January 17, 2025 18:44
gfyrag requested a review from a team as a code owner on January 17, 2025 18:44

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (10)
internal/ledger_test.go (1)

12-12: Ensure deterministic ordering in feature set string representation to prevent flaky tests.

The test assertion on line 12 compares the string representation of the feature set. Since map iteration is not ordered in Go, the output string may vary, causing the test to fail intermittently. Consider sorting the features before generating the string representation to make the test result consistent.

Apply this diff to sort features in the String() method:

 func (f FeatureSet) String() string {
-    var features []string
+    var featuresList []string
     for feature, state := range f {
-        features = append(features, fmt.Sprintf("%s=%s", feature.Short(), state))
+        featuresList = append(featuresList, fmt.Sprintf("%s=%s", feature.Short(), state))
     }
-    return strings.Join(features, ",")
+    sort.Strings(featuresList)
+    return strings.Join(featuresList, ",")
 }
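Note that the suggested change also relies on the sort package; it would need to be added to this file's import block if it is not already there.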
internal/storage/ledger/volumes.go (1)

36-39: Remove redundant error return.

The error is already being resolved and returned at line 36. The additional return at line 39 is unnecessary.

Apply this diff to remove the redundant error return:

 if err != nil {
   return postgres.ResolveError(err)
 }
-return err
+return nil
internal/posting.go (2)

67-69: Optimize map initialization and add early return.

Consider optimizing the map initialization and adding an early return for empty postings.

Apply this diff to optimize the function:

 func (p Postings) InvolvedVolumes() map[string][]string {
+    if len(p) == 0 {
+        return map[string][]string{}
+    }
-    ret := make(map[string][]string)
+    ret := make(map[string][]string, len(p)*2) // Pre-allocate for source and destination

76-77: Consider deduplicating assets during insertion.

For large datasets, it might be more efficient to deduplicate assets during insertion rather than at the end.

Here's an alternative implementation using a map for O(1) lookups:

-ret[posting.Source] = append(ret[posting.Source], posting.Asset)
-ret[posting.Destination] = append(ret[posting.Destination], posting.Asset)
+// Use map for O(1) lookups during insertion
+if _, exists := ret[posting.Source]; !exists {
+    ret[posting.Source] = make(map[string]struct{})
+}
+ret[posting.Source][posting.Asset] = struct{}{}
+
+if _, exists := ret[posting.Destination]; !exists {
+    ret[posting.Destination] = make(map[string]struct{})
+}
+ret[posting.Destination][posting.Asset] = struct{}{}

Then convert maps to sorted slices at the end:

result := make(map[string][]string, len(ret))
for account, assets := range ret {
    slice := make([]string, 0, len(assets))
    for asset := range assets {
        slice = append(slice, asset)
    }
    slices.Sort(slice)
    result[account] = slice
}
return result
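Putting the two fragments above together, a self-contained sketch of the deduplicating variant could look like the following. The Posting struct here is trimmed to the fields this method needs (the real type also carries an amount), and the sorted-slice return contract is an assumption made for illustration, not the PR's actual code.

package ledger

import "sort"

// Posting is reduced to the fields needed here; the real type has more.
type Posting struct {
	Source      string
	Destination string
	Asset       string
}

type Postings []Posting

// InvolvedVolumes returns, for each account appearing as source or
// destination, the de-duplicated, sorted list of assets it is involved with.
func (p Postings) InvolvedVolumes() map[string][]string {
	if len(p) == 0 {
		return map[string][]string{}
	}
	// Deduplicate during insertion using a set per account.
	sets := make(map[string]map[string]struct{}, len(p)*2)
	add := func(account, asset string) {
		if _, ok := sets[account]; !ok {
			sets[account] = map[string]struct{}{}
		}
		sets[account][asset] = struct{}{}
	}
	for _, posting := range p {
		add(posting.Source, posting.Asset)
		add(posting.Destination, posting.Asset)
	}
	// Convert the sets to sorted slices at the end.
	ret := make(map[string][]string, len(sets))
	for account, assets := range sets {
		list := make([]string, 0, len(assets))
		for asset := range assets {
			list = append(list, asset)
		}
		sort.Strings(list)
		ret[account] = list
	}
	return ret
}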
internal/storage/ledger/balances.go (2)

19-37: LGTM! Consider using a more robust key generation method.

The implementation correctly prevents deadlocks by sorting lock keys. However, consider using a more robust key generation method to avoid potential collisions.

-			lockKeys = append(lockKeys, fmt.Sprintf("%s-%s-%s", store.ledger.Name, account, asset))
+			lockKeys = append(lockKeys, fmt.Sprintf("ledger:%s:account:%s:asset:%s", store.ledger.Name, account, asset))
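For illustration, here is a minimal, compilable sketch of the ordered-lock idea discussed above, assuming PostgreSQL advisory locks keyed by a hash of the composite string. The package name, key format, and hashing are assumptions; the PR's actual lockVolumes may build and take its locks differently.

package storage

import (
	"context"
	"database/sql"
	"fmt"
	"hash/fnv"
	"sort"
)

// lockVolumes acquires one advisory lock per (ledger, account, asset) pair,
// always in lexicographic key order, so two concurrent transactions can
// never wait on each other in a cycle.
func lockVolumes(ctx context.Context, tx *sql.Tx, ledgerName string, involved map[string][]string) error {
	var lockKeys []string
	for account, assets := range involved {
		for _, asset := range assets {
			lockKeys = append(lockKeys, fmt.Sprintf("ledger:%s:account:%s:asset:%s", ledgerName, account, asset))
		}
	}
	sort.Strings(lockKeys)
	for _, key := range lockKeys {
		h := fnv.New64a()
		_, _ = h.Write([]byte(key))
		// pg_advisory_xact_lock holds the lock until the transaction ends.
		if _, err := tx.ExecContext(ctx, "select pg_advisory_xact_lock($1)", int64(h.Sum64())); err != nil {
			return err
		}
	}
	return nil
}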

75-86: LGTM! Consider adding an index for performance optimization.

The implementation correctly handles volume locking and uses efficient aggregate functions. However, consider adding an index to optimize the grouped query.

Add the following index to improve query performance:

CREATE INDEX idx_accounts_volumes_ledger_address_asset ON accounts_volumes (ledger, accounts_address, asset);
internal/storage/ledger/resource_accounts.go (1)

110-111: LGTM! Consider adding indexes for aggregation performance.

The implementation correctly uses aggregate functions for balance and volume calculations. However, consider adding indexes to optimize the grouped queries.

Add the following indexes to improve aggregation performance:

CREATE INDEX idx_accounts_volumes_group ON accounts_volumes (ledger, accounts_address, asset, input, output);
CREATE INDEX idx_moves_group ON moves (ledger, accounts_address, asset, effective_date);

Also applies to: 173-174

internal/storage/ledger/accounts_test.go (2)

154-160: Added debugging support for test failures.

The addition of table and query dumps will help diagnose test failures. However, consider using a test helper function to encapsulate this debugging logic.

+func dumpDebugInfo(t *testing.T, store *Store, ctx context.Context) {
+    t.Helper()
+    store.DumpTables(ctx, "transactions")
+    store.DumpQuery(ctx, store.GetDB().NewSelect().
+        TableExpr(store.GetPrefixedRelationName("moves")).
+        Where("ledger = ?", store.GetLedger().Name).
+        Where("accounts_address = 'account:1'").
+        Order("seq desc"))
+}

281-284: Consider using test helper for debugging.

Similar debugging code appears in multiple places. Consider using a shared helper function.

internal/controller/ledger/controller_default.go (1)

459-465: Consider using a single timestamp for all fields.

The code sets the same timestamp (now) for InsertionDate, UpdatedAt, and FirstUsage. Consider:

  1. Using different timestamps based on the actual first usage
  2. Making the timestamp fields more explicit about their purpose
-	now := time.Now()
-	if err := store.UpsertAccounts(ctx, &ledger.Account{
-		Address:       parameters.Input.Address,
-		Metadata:      metadata,
-		InsertionDate: now,
-		UpdatedAt:     now,
-		FirstUsage:    now,
+	currentTime := time.Now()
+	if err := store.UpsertAccounts(ctx, &ledger.Account{
+		Address:       parameters.Input.Address,
+		Metadata:      metadata,
+		InsertionDate: currentTime,  // When the account is first created
+		UpdatedAt:     currentTime,  // When the metadata was last modified
+		FirstUsage:    currentTime,  // When the account was first used in a transaction
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a4b9a5 and 8df9f20.

⛔ Files ignored due to path filters (6)
  • .github/workflows/benchmark-comparison.yml is excluded by !**/*.yml
  • go.mod is excluded by !**/*.mod
  • go.sum is excluded by !**/*.sum, !**/*.sum
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml is excluded by !**/*.yaml
  • tools/generator/go.mod is excluded by !**/*.mod
  • tools/generator/go.sum is excluded by !**/*.sum, !**/*.sum
📒 Files selected for processing (23)
  • internal/README.md (99 hunks)
  • internal/api/v2/controllers_balances.go (2 hunks)
  • internal/controller/ledger/controller_default.go (2 hunks)
  • internal/ledger_test.go (1 hunks)
  • internal/posting.go (2 hunks)
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql (1 hunks)
  • internal/storage/ledger/accounts.go (2 hunks)
  • internal/storage/ledger/accounts_test.go (9 hunks)
  • internal/storage/ledger/balances.go (3 hunks)
  • internal/storage/ledger/balances_test.go (2 hunks)
  • internal/storage/ledger/debug.go (1 hunks)
  • internal/storage/ledger/main_test.go (1 hunks)
  • internal/storage/ledger/moves_test.go (2 hunks)
  • internal/storage/ledger/resource_accounts.go (2 hunks)
  • internal/storage/ledger/resource_volumes.go (2 hunks)
  • internal/storage/ledger/transactions.go (3 hunks)
  • internal/storage/ledger/transactions_test.go (2 hunks)
  • internal/storage/ledger/volumes.go (2 hunks)
  • internal/storage/ledger/volumes_test.go (2 hunks)
  • pkg/features/features.go (4 hunks)
  • test/e2e/api_accounts_metadata_test.go (1 hunks)
  • test/e2e/app_lifecycle_test.go (1 hunks)
  • test/stress/stress_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (2)
  • test/e2e/api_accounts_metadata_test.go
  • test/stress/stress_test.go
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Tests
  • GitHub Check: BenchmarkCompare
🔇 Additional comments (16)
internal/storage/ledger/debug.go (1)

17-18: LGTM! Good security improvement.

Adding the ledger filter prevents potential data leakage across different ledgers during debugging. This is a good defensive programming practice.

Let's verify that this change doesn't break existing debug calls:

✅ Verification successful

Verified: The ledger filter addition is safe and consistent

The change aligns with existing patterns in the codebase where debug queries are already ledger-specific. All current usages in tests will continue to work as expected.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for existing DumpTables calls that might be affected

# Look for test files using DumpTables
rg -A 3 "DumpTables" --type go

# Look for potential debug statements that might rely on cross-ledger data
ast-grep --pattern 'DumpTables($$$)'

Length of output: 4900

internal/storage/ledger/accounts.go (2)

105-115: Verify the correctness of struct tags in the Account struct.

The Account struct (lines 105-115) uses bun ORM struct tags. Please ensure that the struct tags align with the database schema and bun ORM expectations. Specifically, verify the default:'{}' syntax in the Metadata field (line 109) for correctness.


132-134: ⚠️ Potential issue

Ensure safe construction of SQL query conditions to prevent SQL injection.

In the loop (lines 132-134), SQL query conditions are constructed by concatenating strings. While the arguments are passed separately, it's important to ensure that all placeholders ? in the condition string are correctly mapped to the corresponding arguments in argsForUpdate. Misalignment can lead to SQL injection vulnerabilities or runtime errors.

[security]
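One way to keep placeholders and arguments aligned is to append both in the same loop iteration. The helper below is a hypothetical, self-contained sketch of that pattern; the names are illustrative and not taken from the PR.

package main

import (
	"fmt"
	"strings"
)

type accountAsset struct {
	Address string
	Asset   string
}

// buildVolumeFilter appends each pair of "?" placeholders and its arguments
// in the same loop iteration, so the condition string and the args slice can
// never drift out of alignment.
func buildVolumeFilter(pairs []accountAsset) (string, []any) {
	conditions := make([]string, 0, len(pairs))
	args := make([]any, 0, len(pairs)*2)
	for _, p := range pairs {
		conditions = append(conditions, "(accounts_address = ? and asset = ?)")
		args = append(args, p.Address, p.Asset)
	}
	return strings.Join(conditions, " or "), args
}

func main() {
	where, args := buildVolumeFilter([]accountAsset{
		{Address: "bank", Asset: "USD"},
		{Address: "world", Asset: "USD"},
	})
	fmt.Println(where)
	fmt.Println(args)
}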

internal/storage/ledger/transactions.go (1)

137-138: Verify the correctness of volume adjustments in postCommitVolumes.

When adjusting postCommitVolumes in lines 137-138, you're negating posting.Amount before passing it to AddInput and AddOutput. Ensure that negating the amount aligns with the expected logic of volume adjustments in your ledger system to avoid incorrect balances.

internal/storage/ledger/main_test.go (1)

46-46: LGTM!

Always adding the query hook in tests ensures consistent debugging capabilities across all test runs.

internal/storage/ledger/moves_test.go (1)

40-44: LGTM! Account initialization properly includes new timestamp fields.

The Account struct initialization correctly includes all required fields with appropriate timestamp values.

internal/storage/ledger/resource_volumes.go (2)

64-67: Improved query performance with aggregations and proper grouping.

The changes optimize the query by:

  1. Using sum aggregation for input/output calculations
  2. Adding proper grouping by accounts_address and asset

Also applies to: 71-71


83-83: Enhanced consistency in array handling.

Using array_agg with [1] index selection ensures consistent results when retrieving account arrays and metadata.

Also applies to: 87-87

test/e2e/app_lifecycle_test.go (1)

113-113: Improved transaction state monitoring using advisory locks.

The change from monitoring INSERT operations to advisory locks provides a more reliable way to track transaction states, as advisory locks are explicitly managed and less prone to false positives.
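For reference, a test can observe currently held advisory locks through the pg_locks system view. The helper below is a hedged sketch under that assumption and is not necessarily how this e2e test performs the check.

package e2e

import (
	"context"
	"database/sql"
)

// countAdvisoryLocks reports how many advisory locks are currently held in
// the connected database, via the pg_locks system view.
func countAdvisoryLocks(ctx context.Context, db *sql.DB) (int, error) {
	var n int
	err := db.QueryRowContext(ctx,
		"select count(*) from pg_locks where locktype = 'advisory'").Scan(&n)
	return n, err
}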

internal/storage/ledger/balances_test.go (2)

32-48: Enhanced test setup with proper account initialization.

The changes improve test reliability by properly initializing the world account with timestamps and volumes.


62-78: Consistent test setup across test cases.

The same initialization pattern is applied to the concurrent access test, ensuring consistency across test cases.

internal/storage/ledger/accounts_test.go (2)

477-484: Improved account initialization with consistent timestamps.

The changes ensure that accounts are initialized with consistent timestamps, improving test reliability.

Also applies to: 488-492


508-515: Enhanced account upsert test cases.

The test now properly verifies:

  1. Upsert behavior with future timestamps
  2. Retrieval of existing account data
  3. Modification of account fields

Also applies to: 519-550

internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql (1)

1-2: ⚠️ Potential issue

Verify data integrity after primary key removal.

Dropping the primary key constraint from accounts_volumes is a significant change. Ensure that:

  1. No duplicate rows can be inserted
  2. Row uniqueness is maintained through application logic
  3. Performance impact on write operations is acceptable

Run this script to check for any existing duplicate rows that might cause issues:

✅ Verification successful

Primary key removal appears intentional - uniqueness no longer required

The migration sequence shows this is a planned change:

  1. Migration #25 first created the replacement index concurrently
  2. Migration #28 removes the primary key and creates the final index structure
  3. The new index maintains query performance by covering key columns

The removal of uniqueness constraints appears to be an intentional design decision, not a data integrity concern.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for potential duplicate rows in accounts_volumes
rg -A 5 'accounts_volumes.*insert|accounts_volumes.*update' 

Length of output: 61


Script:

#!/bin/bash
# Find table definition and related files
echo "=== Table Definition ==="
fd -e sql accounts_volumes
echo -e "\n=== Table References ==="
rg -l "accounts_volumes"
echo -e "\n=== Entity Definitions ==="
rg -l "AccountsVolume|AccountVolume"

Length of output: 1364


Script:

#!/bin/bash
echo "=== Migration #28 (Current) ==="
cat internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql

echo -e "\n=== Migration #25 (Previous Index) ==="
cat internal/storage/bucket/migrations/25-accounts-volumes-index/up.sql

echo -e "\n=== Entity Definition ==="
rg -A 10 "type AccountsVolume struct|type AccountVolume struct"

Length of output: 922

internal/controller/ledger/controller_default.go (1)

454-457: LGTM! Defensive initialization of metadata.

Good practice to initialize metadata as an empty map if nil.

internal/README.md (1)

748-754: Documentation for InvolvedVolumes() could be more descriptive.

While the method signature is clear, adding a description of the method's purpose and return value format would improve the documentation. Consider documenting:

  • The purpose of mapping accounts to their involved assets
  • The format and structure of the returned map
  • Example usage scenarios

Let's verify if other similar methods in the codebase have descriptive documentation:

Comment on lines +117 to +127
accounts := Map(accounts, func(from *ledger.Account) Account {
	return Account{
		Address:          from.Address,
		Metadata:         from.Metadata,
		FirstUsage:       from.FirstUsage.Time,
		InsertionDate:    from.InsertionDate.Time,
		UpdatedAt:        from.UpdatedAt.Time,
		Volumes:          from.Volumes,
		EffectiveVolumes: from.EffectiveVolumes,
	}
})

💡 Codebase verification

⚠️ Potential issue

Remove .Time access from time fields in account mapping

The time fields in the source ledger.Account struct are already of type time.Time. Accessing .Time on these fields will cause compilation errors. Apply the suggested changes:

-    FirstUsage:       from.FirstUsage.Time,
-    InsertionDate:    from.InsertionDate.Time,
-    UpdatedAt:        from.UpdatedAt.Time,
+    FirstUsage:       from.FirstUsage,
+    InsertionDate:    from.InsertionDate,
+    UpdatedAt:        from.UpdatedAt,
🔗 Analysis chain

Confirm the proper handling of time fields in account mapping.

In the mapping function (lines 117-127), the code accesses the .Time field from from.FirstUsage, from.InsertionDate, and from.UpdatedAt. If these fields are already of type time.Time, accessing .Time may cause a compilation error. Please verify that from.FirstUsage, from.InsertionDate, and from.UpdatedAt are of a type that contains a Time field (e.g., pgtype.Timestamp or a custom type with a Time field).

Apply this diff if the fields are of type time.Time:

-    FirstUsage:       from.FirstUsage.Time,
-    InsertionDate:    from.InsertionDate.Time,
-    UpdatedAt:        from.UpdatedAt.Time,
+    FirstUsage:       from.FirstUsage,
+    InsertionDate:    from.InsertionDate,
+    UpdatedAt:        from.UpdatedAt,
🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for the Account struct definition in the ledger package
ast-grep --pattern 'type Account struct {
  $$$
}'

# Backup search using ripgrep in case the struct is defined differently
rg "type Account struct" -A 15

Length of output: 7060

Comment on lines +123 to +139
var postCommitVolumes ledger.PostCommitVolumes
if tx.PostCommitVolumes != nil {
	postCommitVolumes = tx.PostCommitVolumes.Copy()
}
dstMove := &ledger.Move{
	Account:       posting.Destination,
	Amount:        (*bunpaginate.BigInt)(posting.Amount),
	Asset:         posting.Asset,
	InsertionDate: tx.InsertedAt,
	EffectiveDate: tx.Timestamp,
	TransactionID: tx.ID,
}
if postCommitVolumes != nil {
	dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
	postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
}
moves = append(moves, dstMove)

🛠️ Refactor suggestion

⚠️ Potential issue

Prevent potential panic due to missing keys when accessing postCommitVolumes.

In the loop (lines 123-139), the code accesses postCommitVolumes[posting.Destination][posting.Asset] without checking if the keys exist. If a key is missing, this will cause a runtime panic. Please add checks to ensure that the nested map entries exist before accessing them or initialize them if necessary.

Apply this diff to safely access postCommitVolumes:

 if tx.PostCommitVolumes != nil {
-    dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
-    postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+    if assets, ok := postCommitVolumes[posting.Destination]; ok {
+        if volume, ok := assets[posting.Asset]; ok {
+            dstMove.PostCommitVolumes = pointer.For(volume.Copy())
+        }
+    }
+    postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
 }

Similarly, update the source move to prevent panics:

 if tx.PostCommitVolumes != nil {
-    srcMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Source][posting.Asset].Copy())
-    postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+    if assets, ok := postCommitVolumes[posting.Source]; ok {
+        if volume, ok := assets[posting.Asset]; ok {
+            srcMove.PostCommitVolumes = pointer.For(volume.Copy())
+        }
+    }
+    postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
 }
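The ok-checked lookup can also be factored into a small helper. The program below is a self-contained illustration of the pattern; the Volumes shape (big.Int input/output) mirrors what the tests in this PR use, but the types and the helper are assumptions, not the actual ledger package.

package main

import (
	"fmt"
	"math/big"
)

// Volumes is a stand-in for the ledger type; Copy dereferences its fields,
// which is why copying a zero value obtained from missing map keys is unsafe.
type Volumes struct {
	Input, Output *big.Int
}

func (v Volumes) Copy() Volumes {
	return Volumes{
		Input:  new(big.Int).Set(v.Input),
		Output: new(big.Int).Set(v.Output),
	}
}

type PostCommitVolumes map[string]map[string]Volumes

// volumeFor returns the volumes for (account, asset) and whether they exist,
// so callers never call Copy on a zero value built from missing keys.
func volumeFor(pcv PostCommitVolumes, account, asset string) (Volumes, bool) {
	assets, ok := pcv[account]
	if !ok {
		return Volumes{}, false
	}
	v, ok := assets[asset]
	return v, ok
}

func main() {
	pcv := PostCommitVolumes{
		"world": {"USD": {Input: big.NewInt(0), Output: big.NewInt(100)}},
	}
	if v, ok := volumeFor(pcv, "world", "USD"); ok {
		c := v.Copy()
		fmt.Println(c.Input, c.Output) // 0 100
	}
	if _, ok := volumeFor(pcv, "bank", "USD"); !ok {
		fmt.Println("no volumes recorded for bank/USD")
	}
}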

Comment on lines +26 to +28
ctx := bundebug.WithDebug(r.Context())

balances, err := common.LedgerFromContext(r.Context()).GetAggregatedBalances(ctx, *rq)

⚠️ Potential issue

Use debug context consistently.

The debug context is created but not used in LedgerFromContext. This inconsistency could lead to loss of debug information.

Apply this diff to ensure consistent context usage:

 ctx := bundebug.WithDebug(r.Context())
-balances, err := common.LedgerFromContext(r.Context()).GetAggregatedBalances(ctx, *rq)
+balances, err := common.LedgerFromContext(ctx).GetAggregatedBalances(ctx, *rq)

Comment on lines +30 to +31
// FeatureIndexTransactionAccounts is used to define if we want to compute post commit volumes of each transaction
FeaturePostCommitVolumes = "POST_COMMIT_VOLUMES"

⚠️ Potential issue

Fix duplicate feature comment.

The comment for FeaturePostCommitVolumes is incorrectly using the same comment text as FeatureIndexTransactionAccounts.

-	// FeatureIndexTransactionAccounts is used to define if we want to compute post commit volumes of each transaction
+	// FeaturePostCommitVolumes is used to define if we want to compute post commit volumes of each transaction
 	FeaturePostCommitVolumes = "POST_COMMIT_VOLUMES"

Comment on lines 4 to 6
-- todo: put in a dedicated migration to avoid lock the table for too long
create index "accounts_volumes_ledger_accounts_address_asset_idx"
on "{{.Schema}}"."accounts_volumes" (ledger, accounts_address, asset) include (input, output);

💡 Codebase verification

Migration needs restructuring to ensure zero-downtime deployment

The review concerns are valid. Current issues:

  • Migration drops PK and creates index without CONCURRENTLY, risking table locks
  • Similar index already exists from migration 25, potentially causing redundancy
  • Missing rollback strategy in down migration

Recommendations:

  1. Split into separate migrations:
    • First migration: Drop the primary key
    • Second migration: Create the new index CONCURRENTLY
  2. Add corresponding DOWN migrations for rollback
  3. Verify if the existing index from migration 25 should be dropped
🔗 Analysis chain

Consider splitting the migration to minimize downtime.

The TODO comment suggests potential locking issues. Consider:

  1. Creating the index concurrently to minimize table locks
  2. Moving the index creation to a separate migration
  3. Adding a rollback strategy in the down migration

Additionally, verify if the included columns (input, output) are frequently accessed together in queries to justify the increased index size.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for queries using input/output columns
rg -A 5 'accounts_volumes.*input|accounts_volumes.*output'

Length of output: 1185


Script:

#!/bin/bash
# Check if migration 28 drops the old index
cat internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql

Length of output: 410

Comment on lines +466 to +480
// todo: adapt
//require.Equalf(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD": {
//			Input:  big.NewInt(0),
//			Output: big.NewInt(int64((i + 1) * 100)),
//		},
//	},
//	"bank": {
//		"USD": {
//			Input:  big.NewInt(int64((i + 1) * 100)),
//			Output: big.NewInt(0),
//		},
//	},
//}, txs[i].PostCommitVolumes, "checking tx %d", i)

⚠️ Potential issue

Adapt the commented test assertions.

The commented assertions for post-commit volumes need to be updated. This could lead to:

  1. Missing validation of critical functionality
  2. Potential bugs in volume calculations going undetected

Please update the test assertions to match the new volume calculation logic or add a tracking issue if this is a known work in progress.

Comment on lines +719 to +732
err := store.UpdateVolumes(ctx, ledger.AccountsVolumes{
	Account: "world",
	Asset:   "USD/2",
	Input:   big.NewInt(0),
	Output:  big.NewInt(100),
})
require.NoError(t, err)
require.Equal(t, ledger.PostCommitVolumes{
	"world": {
		"USD/2": ledger.NewVolumesInt64(0, 100),
	},
}, volumes)

volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
	Account: "world",
	Asset:   "USD/2",
	Input:   big.NewInt(50),
	Output:  big.NewInt(0),
})
require.NoError(t, err)
require.Equal(t, ledger.PostCommitVolumes{
	"world": {
		"USD/2": ledger.NewVolumesInt64(50, 100),
	},
}, volumes)

volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
	Account: "world",
	Asset:   "USD/2",
	Input:   big.NewInt(50),
	Output:  big.NewInt(50),
})
require.NoError(t, err)
require.Equal(t, ledger.PostCommitVolumes{
	"world": {
		"USD/2": ledger.NewVolumesInt64(100, 150),
	},
}, volumes)
// todo
//require.Equal(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD/2": ledger.NewVolumesInt64(0, 100),
//	},
//}, volumes)


⚠️ Potential issue

Restore volume validation in tests.

The test is missing validation for the returned volumes after the update operation.

Consider either:

  1. Updating the assertions to match the new volume calculation logic
  2. Adding new assertions that validate the current behavior
  3. Adding a comment explaining why the validation is temporarily disabled

Comment on lines +733 to +757
//volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
//	Account: "world",
//	Asset:   "USD/2",
//	Input:   big.NewInt(50),
//	Output:  big.NewInt(0),
//})
//require.NoError(t, err)
//require.Equal(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD/2": ledger.NewVolumesInt64(50, 100),
//	},
//}, volumes)

//volumes, err = store.UpdateVolumes(ctx, ledger.AccountsVolumes{
//	Account: "world",
//	Asset:   "USD/2",
//	Input:   big.NewInt(50),
//	Output:  big.NewInt(50),
//})
//require.NoError(t, err)
//require.Equal(t, ledger.PostCommitVolumes{
//	"world": {
//		"USD/2": ledger.NewVolumesInt64(100, 150),
//	},
//}, volumes)

⚠️ Potential issue

Update or remove commented test cases.

Multiple test cases for volume updates are commented out. This reduces test coverage for:

  1. Volume input/output calculations
  2. Sequential volume updates
  3. Edge cases in volume handling

Please either:

  1. Update these test cases to match the new implementation
  2. Add new test cases that cover the same scenarios
  3. Remove the commented code if it's no longer relevant

gfyrag force-pushed the feat/performance-accounts-send-only branch from 8df9f20 to 6fab2f8 on January 20, 2025 15:01

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
internal/storage/ledger/accounts.go (1)

121-123: ⚠️ Potential issue

Remove .Time access from time fields.

This is a duplicate of a previous review comment. The time fields are already of type time.Time, accessing .Time will cause compilation errors.

Apply this diff:

-    FirstUsage:       from.FirstUsage.Time,
-    InsertionDate:    from.InsertionDate.Time,
-    UpdatedAt:        from.UpdatedAt.Time,
+    FirstUsage:       from.FirstUsage,
+    InsertionDate:    from.InsertionDate,
+    UpdatedAt:        from.UpdatedAt,
🧹 Nitpick comments (3)
internal/storage/ledger/accounts.go (1)

129-134: Consider using prepared statements for better performance.

The current implementation builds conditions and arguments in a loop. Using prepared statements would be more efficient for multiple accounts.

internal/storage/ledger/resource_volumes.go (1)

83-87: Consider adding an ORDER BY clause for deterministic array_agg results.

When using array_agg with [1] to get the first element, the results might be non-deterministic without an ORDER BY clause.

Apply this diff:

-    selectVolumes = selectVolumes.ColumnExpr("(array_agg(account_array))[1] as account_array")
+    selectVolumes = selectVolumes.ColumnExpr("(array_agg(account_array ORDER BY address))[1] as account_array")
-    selectVolumes = selectVolumes.ColumnExpr("(array_agg(metadata))[1] as metadata")
+    selectVolumes = selectVolumes.ColumnExpr("(array_agg(metadata ORDER BY address))[1] as metadata")
internal/storage/ledger/volumes.go (1)

12-17: Fix inconsistent metric name in tracing.

The metric name "UpdateBalances" doesn't match the method name "UpdateVolumes", which could lead to confusion in monitoring and debugging.

-		"UpdateBalances",
+		"UpdateVolumes",
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8df9f20 and 6fab2f8.

⛔ Files ignored due to path filters (6)
  • .github/workflows/benchmark-comparison.yml is excluded by !**/*.yml
  • go.mod is excluded by !**/*.mod
  • go.sum is excluded by !**/*.sum, !**/*.sum
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml is excluded by !**/*.yaml
  • tools/generator/go.mod is excluded by !**/*.mod
  • tools/generator/go.sum is excluded by !**/*.sum, !**/*.sum
📒 Files selected for processing (23)
  • internal/README.md (99 hunks)
  • internal/api/v2/controllers_balances.go (2 hunks)
  • internal/controller/ledger/controller_default.go (2 hunks)
  • internal/ledger_test.go (1 hunks)
  • internal/posting.go (2 hunks)
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql (1 hunks)
  • internal/storage/ledger/accounts.go (2 hunks)
  • internal/storage/ledger/accounts_test.go (9 hunks)
  • internal/storage/ledger/balances.go (3 hunks)
  • internal/storage/ledger/balances_test.go (2 hunks)
  • internal/storage/ledger/debug.go (1 hunks)
  • internal/storage/ledger/main_test.go (1 hunks)
  • internal/storage/ledger/moves_test.go (2 hunks)
  • internal/storage/ledger/resource_accounts.go (2 hunks)
  • internal/storage/ledger/resource_volumes.go (2 hunks)
  • internal/storage/ledger/transactions.go (3 hunks)
  • internal/storage/ledger/transactions_test.go (1 hunks)
  • internal/storage/ledger/volumes.go (2 hunks)
  • internal/storage/ledger/volumes_test.go (2 hunks)
  • pkg/features/features.go (4 hunks)
  • test/e2e/api_accounts_metadata_test.go (1 hunks)
  • test/e2e/app_lifecycle_test.go (1 hunks)
  • test/stress/stress_test.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
  • test/e2e/api_accounts_metadata_test.go
  • internal/ledger_test.go
  • internal/api/v2/controllers_balances.go
  • internal/storage/ledger/debug.go
  • internal/storage/ledger/transactions_test.go
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
  • pkg/features/features.go
  • internal/README.md
  • test/e2e/app_lifecycle_test.go
  • internal/storage/ledger/main_test.go
  • internal/storage/ledger/volumes_test.go
  • test/stress/stress_test.go
  • internal/posting.go
  • internal/controller/ledger/controller_default.go
  • internal/storage/ledger/balances.go
  • internal/storage/ledger/balances_test.go
  • internal/storage/ledger/transactions.go
  • internal/storage/ledger/accounts_test.go
  • internal/storage/ledger/resource_accounts.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: BenchmarkCompare
🔇 Additional comments (4)
internal/storage/ledger/moves_test.go (1)

32-44: LGTM! Well-structured test initialization.

The account initialization with timestamps and metadata is properly implemented, ensuring consistent state for testing.

internal/storage/ledger/accounts.go (1)

136-162: LGTM! Well-structured CTE query.

The query construction using CTEs is well-organized and handles both updates and inserts efficiently.

internal/storage/ledger/resource_volumes.go (1)

64-71: LGTM! Efficient volume aggregation.

The use of sum aggregations with proper grouping improves query performance by reducing the number of rows processed.

internal/storage/ledger/volumes.go (1)

11-11: Verify the impact of removing PostCommitVolumes from return type.

This is a breaking change that might affect API consumers who rely on the returned volumes data.

Let's check for potential impacts:

✅ Verification successful

The removal of PostCommitVolumes return type is safe and intentional

This change is part of a planned migration controlled by the POST_COMMIT_VOLUMES feature flag. No code is directly consuming the returned volumes from UpdateVolumes, and volume tracking functionality remains intact in the system.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for direct usage of UpdateVolumes return values
ast-grep --pattern 'volumes, err := $_.UpdateVolumes($$$)'

# Search for any references to PostCommitVolumes type
rg "PostCommitVolumes"

Length of output: 11998

Comment on lines 36 to 50
	return postgres.ResolveError(err)
}

ret := ledger.PostCommitVolumes{}
for _, volumes := range accountVolumes {
	if _, ok := ret[volumes.Account]; !ok {
		ret[volumes.Account] = map[string]ledger.Volumes{}
	}
	ret[volumes.Account][volumes.Asset] = ledger.Volumes{
		Input:  volumes.Input,
		Output: volumes.Output,
	}
}

return ret, err
return err
},
)
)))

⚠️ Potential issue

Fix redundant error return.

The error is resolved on line 36 but then the unresolved error is returned on line 39, which could lose the resolution context.

 			if err != nil {
-				return postgres.ResolveError(err)
+				err = postgres.ResolveError(err)
 			}
-			return err
+			return nil

gfyrag force-pushed the feat/performance-accounts-send-only branch from 6fab2f8 to a59a280 on January 20, 2025 16:36

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (4)
internal/storage/ledger/volumes.go (1)

36-39: ⚠️ Potential issue

Fix error handling inconsistency.

The error is resolved on line 36 but then the unresolved error is returned on line 39, which could lose the resolution context.

Apply this diff to fix the error handling:

 			if err != nil {
-				return postgres.ResolveError(err)
+				err = postgres.ResolveError(err)
 			}
-			return err
+			return nil
🧰 Tools
🪛 GitHub Check: codecov/patch

[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by tests

internal/storage/ledger/accounts.go (1)

121-123: ⚠️ Potential issue

Remove .Time access from time fields.

The time fields in the source ledger.Account struct are already of type time.Time.

Apply this diff:

-    FirstUsage:       from.FirstUsage.Time,
-    InsertionDate:    from.InsertionDate.Time,
-    UpdatedAt:        from.UpdatedAt.Time,
+    FirstUsage:       from.FirstUsage,
+    InsertionDate:    from.InsertionDate,
+    UpdatedAt:        from.UpdatedAt,
internal/storage/ledger/transactions.go (2)

123-139: ⚠️ Potential issue

Prevent potential panic when accessing postCommitVolumes.

The code still has potential panic issues when accessing nested maps without checking if the keys exist.

Apply this diff to safely access postCommitVolumes:

-if postCommitVolumes != nil {
-    dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
-    postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+if postCommitVolumes != nil {
+    if assets, ok := postCommitVolumes[posting.Destination]; ok {
+        if volume, ok := assets[posting.Asset]; ok {
+            dstMove.PostCommitVolumes = pointer.For(volume.Copy())
+        }
+    }
+    postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+}

141-154: ⚠️ Potential issue

Prevent potential panic when accessing postCommitVolumes for source moves.

Similar to the destination moves, the source moves code has potential panic issues when accessing nested maps without checking if the keys exist.

Apply this diff to safely access postCommitVolumes:

-if postCommitVolumes != nil {
-    srcMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Source][posting.Asset].Copy())
-    postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+if postCommitVolumes != nil {
+    if assets, ok := postCommitVolumes[posting.Source]; ok {
+        if volume, ok := assets[posting.Asset]; ok {
+            srcMove.PostCommitVolumes = pointer.For(volume.Copy())
+        }
+    }
+    postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+}
🧹 Nitpick comments (4)
internal/storage/ledger/resource_accounts.go (1)

173-174: Consider database portability for volume calculations.

The composite type cast ::volumes is PostgreSQL-specific. Consider adding a comment explaining this dependency or abstracting it for better database portability.

internal/storage/ledger/transactions.go (1)

55-96: Consider using prepared statements for better security and performance.

The SQL query construction is well-structured but could benefit from using prepared statements to prevent SQL injection and improve query plan caching.

Apply this diff to use prepared statements:

-    selectVolumes := store.db.NewSelect().
+    selectVolumes := store.db.NewRaw(`
+        SELECT asset, sum(input) as input, sum(output) as output, accounts_address
+        FROM ? as volumes
+        WHERE (?) OR (?)
+        GROUP BY accounts_address, asset
+        ORDER BY accounts_address, asset
+    `, store.GetPrefixedRelationName("accounts_volumes"), conditions, args...)
🧰 Tools
🪛 GitHub Check: codecov/patch

[warning] 92-93: internal/storage/ledger/transactions.go#L92-L93
Added lines #L92 - L93 were not covered by tests

internal/storage/ledger/accounts_test.go (2)

154-159: Remove or guard debug statements.

Debug statements should not be committed to the test code. Consider either:

  1. Removing these debug statements, or
  2. Guarding them behind a debug flag/environment variable.
-        store.DumpTables(ctx, "transactions")
-        store.DumpQuery(ctx, store.GetDB().NewSelect().
-            TableExpr(store.GetPrefixedRelationName("moves")).
-            Where("ledger = ?", store.GetLedger().Name).
-            Where("accounts_address = 'account:1'").
-            Order("seq desc"))
+        if os.Getenv("DEBUG") == "true" {
+            store.DumpTables(ctx, "transactions")
+            store.DumpQuery(ctx, store.GetDB().NewSelect().
+                TableExpr(store.GetPrefixedRelationName("moves")).
+                Where("ledger = ?", store.GetLedger().Name).
+                Where("accounts_address = 'account:1'").
+                Order("seq desc"))
+        }

281-284: Remove or guard debug statements.

Similar to the previous debug statements, these should not be committed to the test code.

-        if len(ret.Data) != 2 {
-            fmt.Println("#############")
-            store.DumpTables(ctx, "transactions", "accounts")
-        }
+        if os.Getenv("DEBUG") == "true" && len(ret.Data) != 2 {
+            store.DumpTables(ctx, "transactions", "accounts")
+        }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6fab2f8 and a59a280.

⛔ Files ignored due to path filters (6)
  • .github/workflows/benchmark-comparison.yml is excluded by !**/*.yml
  • go.mod is excluded by !**/*.mod
  • go.sum is excluded by !**/*.sum, !**/*.sum
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml is excluded by !**/*.yaml
  • tools/generator/go.mod is excluded by !**/*.mod
  • tools/generator/go.sum is excluded by !**/*.sum, !**/*.sum
📒 Files selected for processing (23)
  • internal/README.md (99 hunks)
  • internal/api/v2/controllers_balances.go (2 hunks)
  • internal/controller/ledger/controller_default.go (2 hunks)
  • internal/ledger_test.go (1 hunks)
  • internal/posting.go (2 hunks)
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql (1 hunks)
  • internal/storage/ledger/accounts.go (2 hunks)
  • internal/storage/ledger/accounts_test.go (9 hunks)
  • internal/storage/ledger/balances.go (3 hunks)
  • internal/storage/ledger/balances_test.go (2 hunks)
  • internal/storage/ledger/debug.go (1 hunks)
  • internal/storage/ledger/main_test.go (1 hunks)
  • internal/storage/ledger/moves_test.go (2 hunks)
  • internal/storage/ledger/resource_accounts.go (2 hunks)
  • internal/storage/ledger/resource_volumes.go (2 hunks)
  • internal/storage/ledger/transactions.go (3 hunks)
  • internal/storage/ledger/transactions_test.go (1 hunks)
  • internal/storage/ledger/volumes.go (2 hunks)
  • internal/storage/ledger/volumes_test.go (2 hunks)
  • pkg/features/features.go (4 hunks)
  • test/e2e/api_accounts_metadata_test.go (1 hunks)
  • test/e2e/app_lifecycle_test.go (1 hunks)
  • test/stress/stress_test.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (15)
  • internal/ledger_test.go
  • internal/storage/ledger/debug.go
  • internal/api/v2/controllers_balances.go
  • internal/storage/ledger/main_test.go
  • test/e2e/api_accounts_metadata_test.go
  • internal/storage/ledger/transactions_test.go
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
  • pkg/features/features.go
  • internal/README.md
  • test/stress/stress_test.go
  • internal/posting.go
  • internal/storage/ledger/volumes_test.go
  • test/e2e/app_lifecycle_test.go
  • internal/storage/ledger/balances_test.go
  • internal/storage/ledger/resource_volumes.go
🧰 Additional context used
🪛 GitHub Check: codecov/patch
internal/storage/ledger/balances.go

[warning] 76-77: internal/storage/ledger/balances.go#L76-L77
Added lines #L76 - L77 were not covered by tests

internal/storage/ledger/volumes.go

[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by tests

internal/storage/ledger/transactions.go

[warning] 92-93: internal/storage/ledger/transactions.go#L92-L93
Added lines #L92 - L93 were not covered by tests

internal/storage/ledger/accounts.go

[warning] 174-174: internal/storage/ledger/accounts.go#L174
Added line #L174 was not covered by tests

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: BenchmarkCompare
🔇 Additional comments (9)
internal/storage/ledger/balances.go (1)

28-29: Great comment about lock ordering!

The comment explaining the importance of consistent lock ordering for deadlock prevention is very helpful.

internal/storage/ledger/moves_test.go (1)

32-44: Well-structured test with comprehensive coverage!

The test effectively:

  1. Uses clear timestamps for move ordering
  2. Verifies post-commit volumes after each move
  3. Includes metadata handling
  4. Tests high concurrency scenarios for deadlock detection
internal/storage/ledger/accounts.go (1)

136-162: Well-structured CTE query for account upserts.

The query effectively:

  1. Creates a temporary table for accounts to upsert
  2. Identifies existing accounts needing updates
  3. Identifies non-existing accounts
  4. Combines results using UNION ALL
internal/storage/ledger/transactions.go (1)

40-49: LGTM! Good error handling and feature flag check.

The code correctly checks the feature flag and handles errors from the volume locking mechanism.

internal/storage/ledger/accounts_test.go (4)

333-336: LGTM! Improved timestamp handling in tests.

The changes properly initialize and verify the timestamp fields (FirstUsage, InsertionDate, UpdatedAt) in the test assertions.

Also applies to: 351-353


477-484: LGTM! Proper timestamp initialization.

The changes correctly initialize all timestamp fields (FirstUsage, InsertionDate, UpdatedAt) at the beginning of the test to ensure consistency.

Also applies to: 488-492


508-515: LGTM! Good test coverage for future timestamp handling.

The test properly verifies that account updates with future timestamps don't modify existing records, maintaining data integrity.

Also applies to: 519-527


529-550: LGTM! Good test coverage for past timestamp handling.

The test properly verifies that account updates with past timestamps correctly modify FirstUsage while maintaining the original InsertionDate and updating UpdatedAt.

internal/controller/ledger/controller_default.go (1)

454-465: LGTM! Improved metadata and timestamp handling.

The changes:

  1. Prevent nil pointer dereference by initializing empty metadata
  2. Ensure consistent timestamp handling by properly setting FirstUsage, InsertionDate, and UpdatedAt fields

Comment on lines +75 to +86
if err := store.lockVolumes(ctx, query); err != nil {
	return nil, postgres.ResolveError(err)
}

💡 Codebase verification

Error handling test coverage needed for lockVolumes function

The review comment is valid. Current tests only cover successful operations, with no coverage for error scenarios in the lockVolumes function. Given that this function is used in critical paths (transactions and balances), error handling test coverage should be added.

Suggested test locations:

  • internal/storage/ledger/balances_test.go
  • internal/storage/ledger/transactions_test.go
🔗 Analysis chain

Add test coverage for error handling.

The error handling path in lockVolumes is not covered by tests.

Run this script to verify test coverage:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check test coverage for lockVolumes error handling
rg -A 5 "TestLockVolumes|lockVolumes.*error" test_*.go

Length of output: 108


Script:

#!/bin/bash
# First, find all test files in the ledger directory
echo "=== Test files in ledger directory ==="
fd -t f "test.*\.go$" internal/storage/ledger/

# Then search for lockVolumes related tests and error handling
echo -e "\n=== LockVolumes test coverage ==="
rg -A 5 "TestLockVolumes|lockVolumes.*error" internal/storage/ledger/

Length of output: 1093


Script:

#!/bin/bash
# Find where lockVolumes is called
echo "=== lockVolumes usage ==="
rg -p "lockVolumes" internal/storage/ledger/

# Search for related balance operation tests
echo -e "\n=== Balance operation tests ==="
rg -p "func Test.*Balance" internal/storage/ledger/balances_test.go

Length of output: 652


Script:

#!/bin/bash
# Check the content of balance-related tests
echo "=== Balance test implementations ==="
rg -A 20 "^func TestBalances" internal/storage/ledger/balances_test.go

# Check for any error handling in tests
echo -e "\n=== Error handling in tests ==="
rg "Error|err.*!= nil" internal/storage/ledger/balances_test.go

Length of output: 2203

🧰 Tools
🪛 GitHub Check: codecov/patch

[warning] 76-77: internal/storage/ledger/balances.go#L76-L77
Added lines #L76 - L77 were not covered by tests

Comment on lines +91 to +93
	if err != nil {
		return fmt.Errorf("failed to aggregate post commit volumes: %w", err)
	}

💡 Codebase verification

Add test coverage for error handling in finalAggregate.Scan

The codebase has good test coverage for successful scenarios, but lacks specific test cases for error handling of the finalAggregate.Scan operation. Consider adding test cases for:

  • Database scan errors
  • Malformed data scenarios
  • Context cancellation
🔗 Analysis chain

Add test coverage for error handling.

The error handling for the finalAggregate.Scan operation lacks test coverage.

Run the following script to verify test coverage:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check test coverage for error handling.

# Test: Search for test cases covering the error handling.
rg -A 5 "failed to aggregate post commit volumes" internal/storage/ledger/*_test.go

Length of output: 330


Script:

#!/bin/bash
# Get the implementation context
ast-grep --pattern 'finalAggregate.Scan($$$)' internal/storage/ledger/transactions.go

Length of output: 87


Script:

#!/bin/bash
# Get more context around the error message and related test patterns
rg -B 5 -A 5 "failed to aggregate post commit volumes" internal/storage/ledger/
rg "Scan\(" internal/storage/ledger/transactions_test.go

Length of output: 917


Script:

#!/bin/bash
# Look for test cases that might be testing this functionality
rg -B 3 -A 3 "PostCommitVolumes" internal/storage/ledger/transactions_test.go
rg -B 3 -A 3 "finalAggregate" internal/storage/ledger/transactions_test.go

Length of output: 2864

🧰 Tools
🪛 GitHub Check: codecov/patch

[warning] 92-93: internal/storage/ledger/transactions.go#L92-L93
Added lines #L92 - L93 were not covered by tests

@gfyrag gfyrag force-pushed the feat/performance-accounts-send-only branch from a59a280 to 46a0e18 Compare January 21, 2025 08:31
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
internal/storage/ledger/volumes.go (1)

36-39: ⚠️ Potential issue

Fix the redundant error return.

When the trailing return err is reached, err is necessarily nil: a non-nil error has already been returned as postgres.ResolveError(err) inside the preceding check. Returning err there only obscures that invariant, so return nil explicitly.

Apply this diff:

 			if err != nil {
 				return postgres.ResolveError(err)
 			}
-			return err
+			return nil
🧰 Tools
🪛 GitHub Check: codecov/patch

[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by tests

internal/storage/ledger/transactions.go (2)

123-139: ⚠️ Potential issue

Fix potential panic when accessing post-commit volumes.

The code accesses nested maps without checking if the keys exist, which could cause a runtime panic.

Apply this diff to safely access post-commit volumes:

-if postCommitVolumes != nil {
-  dstMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Destination][posting.Asset].Copy())
-  postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
-}
+if postCommitVolumes != nil {
+  if assets, ok := postCommitVolumes[posting.Destination]; ok {
+    if volume, ok := assets[posting.Asset]; ok {
+      dstMove.PostCommitVolumes = pointer.For(volume.Copy())
+    }
+  }
+  postCommitVolumes.AddInput(posting.Destination, posting.Asset, new(big.Int).Neg(posting.Amount))
+}

141-154: ⚠️ Potential issue

Fix potential panic when accessing post-commit volumes.

The code accesses nested maps without checking if the keys exist, which could cause a runtime panic.

Apply this diff to safely access post-commit volumes:

-if postCommitVolumes != nil {
-  srcMove.PostCommitVolumes = pointer.For(postCommitVolumes[posting.Source][posting.Asset].Copy())
-  postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
-}
+if postCommitVolumes != nil {
+  if assets, ok := postCommitVolumes[posting.Source]; ok {
+    if volume, ok := assets[posting.Asset]; ok {
+      srcMove.PostCommitVolumes = pointer.For(volume.Copy())
+    }
+  }
+  postCommitVolumes.AddOutput(posting.Source, posting.Asset, new(big.Int).Neg(posting.Amount))
+}
🧹 Nitpick comments (1)
internal/storage/ledger/transactions.go (1)

55-96: Consider using query builder methods for SQL conditions.

The current implementation uses string concatenation for SQL conditions, which could be error-prone and harder to maintain. Consider using the query builder's methods for constructing conditions.

Apply this diff to improve the query construction:

-conditions := make([]string, 0)
-args := make([]any, 0)
-for account, assets := range involvedVolumes {
-  for _, asset := range assets {
-    conditions = append(conditions, "ledger = ? and accounts_address = ? and asset = ?")
-    args = append(args, store.ledger.Name, account, asset)
-  }
-}
-
-selectVolumes := store.db.NewSelect().
-  // ...
-  Where("("+strings.Join(conditions, ") OR (")+")", args...)
+selectVolumes := store.db.NewSelect()
+for account, assets := range involvedVolumes {
+  for _, asset := range assets {
+    selectVolumes = selectVolumes.WhereGroup(" OR ", func(q *bun.SelectQuery) *bun.SelectQuery {
+      return q.Where("ledger = ? AND accounts_address = ? AND asset = ?",
+        store.ledger.Name, account, asset)
+    })
+  }
+}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a59a280 and 46a0e18.

⛔ Files ignored due to path filters (6)
  • .github/workflows/benchmark-comparison.yml is excluded by !**/*.yml
  • go.mod is excluded by !**/*.mod
  • go.sum is excluded by !**/*.sum, !**/*.sum
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/notes.yaml is excluded by !**/*.yaml
  • tools/generator/go.mod is excluded by !**/*.mod
  • tools/generator/go.sum is excluded by !**/*.sum, !**/*.sum
📒 Files selected for processing (23)
  • internal/README.md (99 hunks)
  • internal/api/v2/controllers_balances.go (2 hunks)
  • internal/controller/ledger/controller_default.go (2 hunks)
  • internal/ledger_test.go (1 hunks)
  • internal/posting.go (2 hunks)
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql (1 hunks)
  • internal/storage/ledger/accounts.go (2 hunks)
  • internal/storage/ledger/accounts_test.go (9 hunks)
  • internal/storage/ledger/balances.go (3 hunks)
  • internal/storage/ledger/balances_test.go (2 hunks)
  • internal/storage/ledger/debug.go (1 hunks)
  • internal/storage/ledger/main_test.go (1 hunks)
  • internal/storage/ledger/moves_test.go (2 hunks)
  • internal/storage/ledger/resource_accounts.go (2 hunks)
  • internal/storage/ledger/resource_volumes.go (2 hunks)
  • internal/storage/ledger/transactions.go (3 hunks)
  • internal/storage/ledger/transactions_test.go (1 hunks)
  • internal/storage/ledger/volumes.go (2 hunks)
  • internal/storage/ledger/volumes_test.go (2 hunks)
  • pkg/features/features.go (4 hunks)
  • test/e2e/api_accounts_metadata_test.go (1 hunks)
  • test/e2e/app_lifecycle_test.go (1 hunks)
  • test/stress/stress_test.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (15)
  • test/e2e/api_accounts_metadata_test.go
  • internal/ledger_test.go
  • internal/api/v2/controllers_balances.go
  • internal/storage/bucket/migrations/28-remove-accounts-volumes-unique-index/up.sql
  • internal/README.md
  • pkg/features/features.go
  • internal/storage/ledger/transactions_test.go
  • internal/storage/ledger/debug.go
  • internal/storage/ledger/volumes_test.go
  • test/stress/stress_test.go
  • internal/controller/ledger/controller_default.go
  • internal/posting.go
  • internal/storage/ledger/main_test.go
  • test/e2e/app_lifecycle_test.go
  • internal/storage/ledger/balances_test.go
🧰 Additional context used
🪛 GitHub Check: codecov/patch
internal/storage/ledger/volumes.go

[warning] 36-36: internal/storage/ledger/volumes.go#L36
Added line #L36 was not covered by tests

internal/storage/ledger/accounts.go

[warning] 174-174: internal/storage/ledger/accounts.go#L174
Added line #L174 was not covered by tests

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: BenchmarkCompare
🔇 Additional comments (16)
internal/storage/ledger/balances.go (2)

19-37: LGTM! Well-designed locking mechanism.

The lockVolumes function implementation is robust:

  • Generates consistent lock keys to prevent deadlocks
  • Uses sorted keys to ensure a consistent locking order
  • Properly handles errors

75-77: LGTM! Good integration of volume locking.

The GetBalances function now properly locks volumes before querying to prevent race conditions.

internal/storage/ledger/moves_test.go (1)

32-44: LGTM! Proper initialization of Account fields.

The Account initialization correctly includes all required fields with appropriate timestamps and metadata.

internal/storage/ledger/accounts.go (3)

105-115: LGTM! Well-structured Account definition.

The Account struct is well-defined with appropriate field tags and types.


129-170: LGTM! Well-implemented upsert logic.

The CTE-based upsert implementation is robust:

  • Properly handles existing and new accounts
  • Correctly merges metadata
  • Maintains first_usage integrity

121-123: ⚠️ Potential issue

Remove .Time access from time fields.

The time fields in the source ledger.Account struct are already of type time.Time. Accessing .Time on these fields will cause compilation errors.

Apply this diff:

-					FirstUsage:       from.FirstUsage.Time,
-					InsertionDate:    from.InsertionDate.Time,
-					UpdatedAt:        from.UpdatedAt.Time,
+					FirstUsage:       from.FirstUsage,
+					InsertionDate:    from.InsertionDate,
+					UpdatedAt:        from.UpdatedAt,

Likely invalid or redundant comment.

internal/storage/ledger/resource_accounts.go (2)

110-111: LGTM! Improved balance calculation.

The balance calculation now correctly uses aggregate functions with proper grouping.


173-174: LGTM! Improved volumes calculation.

The volumes calculation now correctly uses aggregate functions with proper grouping.

internal/storage/ledger/resource_volumes.go (3)

64-67: LGTM! Performance improvement using aggregation.

The changes improve query performance by using aggregate functions and grouping, reducing the number of rows returned from the database.

Also applies to: 71-71


83-83: LGTM! Performance improvement using array aggregation.

The changes improve query performance by using array_agg to aggregate account arrays and selecting only the first element.


87-87: LGTM! Performance improvement using array aggregation.

The changes improve query performance by using array_agg to aggregate metadata and selecting only the first element.
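Roughly, the shape being described looks like the following bun query. Table and column names are illustrative rather than the exact query built in resource_volumes.go.

package example

import "github.com/uptrace/bun"

// buildVolumesQuery collapses the append-only volume rows back into one row
// per (account, asset); balances can then be derived as sum(input) - sum(output).
// Columns that are constant within a group can be collapsed the same way,
// e.g. ColumnExpr("(array_agg(metadata))[1] as metadata").
func buildVolumesQuery(db *bun.DB) *bun.SelectQuery {
	return db.NewSelect().
		TableExpr("accounts_volumes").
		Column("accounts_address", "asset").
		ColumnExpr("sum(input) as input").
		ColumnExpr("sum(output) as output").
		Group("accounts_address", "asset")
}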

internal/storage/ledger/transactions.go (1)

40-49: LGTM! Added debugging context and volume locking.

The changes improve debugging capabilities and prevent race conditions by adding debug context and volume locking for post-commit volumes.

internal/storage/ledger/accounts_test.go (4)

154-160: LGTM! Added debug statements for test failures.

The changes improve test debugging capabilities by adding DumpTables and DumpQuery calls.


281-284: LGTM! Added debug statements for test failures.

The changes improve test debugging capabilities by adding DumpTables call when the test fails.


477-484: LGTM! Improved test data initialization.

The changes improve test consistency by initializing timestamps at the beginning and using the same timestamp for all fields.


508-550: LGTM! Added comprehensive test cases for account upsert.

The changes improve test coverage by adding test cases for:

  • Upsert with no modification
  • Upsert with modification
  • Verification of account state after each operation

@gfyrag gfyrag force-pushed the feat/performance-accounts-send-only branch from 46a0e18 to cf55177 Compare January 22, 2025 14:04
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
internal/storage/ledger/accounts.go (1)

121-123: ⚠️ Potential issue

Remove .Time access from time fields in account mapping.

The time fields in the source ledger.Account struct are already of type time.Time. Accessing .Time on these fields will cause compilation errors.

Apply this fix:

-    FirstUsage:       from.FirstUsage.Time,
-    InsertionDate:    from.InsertionDate.Time,
-    UpdatedAt:        from.UpdatedAt.Time,
+    FirstUsage:       from.FirstUsage,
+    InsertionDate:    from.InsertionDate,
+    UpdatedAt:        from.UpdatedAt,
🧹 Nitpick comments (3)
internal/storage/ledger/moves_test.go (1)

32-32: Make test deterministic by using fixed timestamps.

Using time.Now() in tests can make them non-deterministic. Consider using fixed timestamps to ensure consistent test behavior.

Example:

-now := time.Now()
+now := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
internal/storage/ledger/balances.go (1)

28-29: Enhance comment about lock ordering.

While the comment about sorting is helpful, it could be more descriptive about why consistent ordering prevents deadlocks.

Suggested comment:

-// notes(gfyrag): Keep order, it ensures consistent locking order and limit deadlocks
+// Sort lock keys to ensure a consistent global ordering across transactions.
+// This prevents deadlocks by ensuring all transactions acquire locks in the same order,
+// avoiding circular wait conditions where transaction A waits for B while B waits for A.
internal/controller/ledger/controller_default.go (1)

454-465: Consider improvements to timestamp and metadata handling.

The changes look good but could be enhanced:

  1. Consider using a constant empty map for nil metadata to avoid allocations:
var emptyMetadata = make(metadata.Metadata)
  2. Consider accepting custom timestamps through parameters for testing purposes.

Apply this diff to implement the suggestions:

+var emptyMetadata = make(metadata.Metadata)
+
 func (ctrl *DefaultController) saveAccountMetadata(ctx context.Context, store Store, parameters Parameters[SaveAccountMetadata]) (*ledger.SavedMetadata, error) {
 	metadata := parameters.Input.Metadata
 	if metadata == nil {
-		metadata = make(map[string]string)
+		metadata = emptyMetadata
 	}
 
-	now := time.Now()
+	now := parameters.Input.Timestamp
+	if now.IsZero() {
+		now = time.Now()
+	}
 	if err := store.UpsertAccounts(ctx, &ledger.Account{
 		Address:       parameters.Input.Address,
 		Metadata:      metadata,
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46a0e18 and cf55177.

⛔ Files ignored due to path filters (7)
  • .github/workflows/benchmark-comparison.yml is excluded by !**/*.yml
  • go.mod is excluded by !**/*.mod
  • go.sum is excluded by !**/*.sum, !**/*.sum
  • internal/storage/bucket/migrations/28-create-accounts-volumes-index/notes.yaml is excluded by !**/*.yaml
  • internal/storage/bucket/migrations/29-drop-accounts-volumes-pk/notes.yaml is excluded by !**/*.yaml
  • tools/generator/go.mod is excluded by !**/*.mod
  • tools/generator/go.sum is excluded by !**/*.sum, !**/*.sum
📒 Files selected for processing (25)
  • internal/README.md (99 hunks)
  • internal/api/v2/controllers_balances.go (2 hunks)
  • internal/controller/ledger/controller_default.go (2 hunks)
  • internal/ledger_test.go (1 hunks)
  • internal/posting.go (2 hunks)
  • internal/storage/bucket/migrations/28-create-accounts-volumes-index/up.sql (1 hunks)
  • internal/storage/bucket/migrations/29-drop-accounts-volumes-pk/up.sql (1 hunks)
  • internal/storage/ledger/accounts.go (2 hunks)
  • internal/storage/ledger/accounts_test.go (7 hunks)
  • internal/storage/ledger/balances.go (3 hunks)
  • internal/storage/ledger/balances_test.go (2 hunks)
  • internal/storage/ledger/debug.go (1 hunks)
  • internal/storage/ledger/main_test.go (1 hunks)
  • internal/storage/ledger/moves_test.go (2 hunks)
  • internal/storage/ledger/resource.go (1 hunks)
  • internal/storage/ledger/resource_accounts.go (2 hunks)
  • internal/storage/ledger/resource_volumes.go (5 hunks)
  • internal/storage/ledger/transactions.go (3 hunks)
  • internal/storage/ledger/transactions_test.go (1 hunks)
  • internal/storage/ledger/volumes.go (1 hunks)
  • internal/storage/ledger/volumes_test.go (2 hunks)
  • pkg/features/features.go (4 hunks)
  • test/e2e/api_accounts_metadata_test.go (1 hunks)
  • test/e2e/app_lifecycle_test.go (1 hunks)
  • test/stress/stress_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (2)
  • internal/storage/ledger/resource.go
  • internal/storage/bucket/migrations/29-drop-accounts-volumes-pk/up.sql
🚧 Files skipped from review as they are similar to previous changes (14)
  • internal/ledger_test.go
  • pkg/features/features.go
  • test/e2e/api_accounts_metadata_test.go
  • internal/README.md
  • internal/posting.go
  • internal/storage/ledger/transactions_test.go
  • test/stress/stress_test.go
  • internal/storage/ledger/balances_test.go
  • internal/storage/ledger/debug.go
  • internal/api/v2/controllers_balances.go
  • internal/storage/ledger/volumes_test.go
  • internal/storage/ledger/transactions.go
  • internal/storage/ledger/main_test.go
  • internal/storage/ledger/resource_volumes.go
👮 Files not reviewed due to content moderation or server errors (2)
  • internal/storage/ledger/accounts_test.go
  • internal/controller/ledger/controller_default.go
🔇 Additional comments (13)
test/e2e/app_lifecycle_test.go (1)

113-113: LGTM! Improved transaction locking strategy.

The change from checking INSERT operations to advisory locks is a good improvement as it:

  1. Provides a more reliable way to track transaction processing
  2. Reduces table-level locking contention
  3. Aligns with PostgreSQL best practices for concurrency control
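As an illustration of what such an assertion can look at: pg_locks and locktype = 'advisory' are standard PostgreSQL, while the helper itself is hypothetical and not the project's test code.

package example

import (
	"context"
	"database/sql"
)

// countAdvisoryLocks reports how many advisory locks are currently held in
// the database, which a lifecycle test can poll while transactions are in flight.
func countAdvisoryLocks(ctx context.Context, db *sql.DB) (int, error) {
	var count int
	err := db.QueryRowContext(ctx,
		"select count(*) from pg_locks where locktype = 'advisory'").Scan(&count)
	return count, err
}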

Let's verify the advisory lock usage in the codebase:

✅ Verification successful

✅ Advisory lock verification successful

The test change correctly reflects the codebase's transaction processing strategy, which uses PostgreSQL advisory locks for concurrency control. This is consistently implemented across the codebase and well-documented in CONTRIBUTING.md.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the advisory lock implementation

# Check for advisory lock usage in the codebase
rg -A 5 'pg_advisory.*lock'

# Check for any remaining direct INSERT operations on accounts
rg -A 5 'INSERT INTO.*accounts'

Length of output: 3552

internal/storage/ledger/resource_accounts.go (2)

110-111: LGTM! Improved balance calculation using aggregates.

The change to use sum aggregates for input and output calculations provides a more accurate way to compute balances, especially when dealing with multiple transactions per asset.


173-174: LGTM! Consider monitoring query performance.

The implementation correctly uses aggregate functions with proper grouping for volume calculations. The approach aligns well with the append-only strategy for balances.

Since the query groups by both accounts_address and asset, consider monitoring query performance with large datasets to ensure optimal execution.

Run the following script to check the query execution plan:

✅ Verification successful

Performance is well-optimized with existing indexes

The codebase already includes appropriate indexes on the accounts_volumes table that cover both the grouping columns (accounts_address, asset) and the aggregated columns (input, output). The query will perform efficiently even with large datasets.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if there are any indexes on accounts_volumes table that could help with the grouping operation

# Look for CREATE INDEX statements in migration files
rg -A 2 'CREATE.*INDEX.*accounts_volumes' --type sql

# Look for index definitions in schema files
rg -A 2 'CREATE.*INDEX.*accounts_volumes' --type go

Length of output: 108


Script:

#!/bin/bash
# Cast a wider net to find relevant table and index definitions

# Look for table creation statements related to volumes
rg -i "create.*table.*volume" --type sql -A 5

# Search for migration files containing volume-related changes
fd -e sql -e go | xargs rg -l "volume"

# Look for any index definitions in the codebase
rg -i "create.*index.*volume" --type sql

Length of output: 4268

internal/storage/bucket/migrations/28-create-accounts-volumes-index/up.sql (1)

1-2: LGTM! Well-designed index creation.

The concurrent index creation with included columns is well structured:

  • Using create index concurrently minimizes impact on production traffic
  • Column order (ledger, accounts_address, asset) aligns with query patterns
  • Including input/output columns optimizes query performance
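For reference, the described statement has roughly the following shape; the index name is illustrative and the migration file itself is the source of truth. Note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block.

package example

import (
	"context"
	"database/sql"
)

// createAccountsVolumesIndex builds the covering index without blocking writes.
func createAccountsVolumesIndex(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, `
		create index concurrently if not exists accounts_volumes_idx
		on accounts_volumes (ledger, accounts_address, asset)
		include (input, output)`)
	return err
}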
internal/storage/ledger/volumes.go (1)

40-50: ⚠️ Potential issue

Fix the redundant error return.

When the trailing return err is reached, err is necessarily nil: a non-nil error has already been returned as postgres.ResolveError(err) inside the preceding check. Returning err there only obscures that invariant, so return nil explicitly.

Apply this fix:

 			if err != nil {
 				return postgres.ResolveError(err)
 			}
-			return err
+			return nil

Likely invalid or redundant comment.

internal/storage/ledger/balances.go (1)

84-86: Add test coverage for lockVolumes error handling.

The error handling path in lockVolumes is not covered by tests. Given that this function is used in critical paths (transactions and balances), error handling test coverage should be added.

Run this script to verify test coverage:

✅ Verification successful

The review comment is correct - lockVolumes error handling lacks test coverage

The function's error paths are not tested, specifically:

  • PostgreSQL advisory lock acquisition failures
  • Database connection errors during lock operations
  • Transaction rollback behavior when locks fail

This verification confirms that test coverage should be added given the critical nature of this function in transaction processing and balance management.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check test coverage for lockVolumes error handling
rg -A 5 "TestLockVolumes|lockVolumes.*error" test_*.go

Length of output: 108


Script:

#!/bin/bash
# First, find the implementation of lockVolumes
ast-grep --pattern 'func $_ lockVolumes($$$) {
  $$$
}'

# Then check for test files and coverage
cd internal/storage/ledger
rg -A 5 "TestLockVolumes|lockVolumes.*error" .

Length of output: 527


Script:

#!/bin/bash
# Get the complete implementation of lockVolumes
cd internal/storage/ledger
rg -A 15 "^func.*lockVolumes" .

# Look for indirect test coverage through transaction/balance tests
rg -l "Test.*Balance|Test.*Transaction" .

Length of output: 957


Script:

#!/bin/bash
cd internal/storage/ledger

# Check test files for database error simulation or lock-related tests
rg -A 10 "mock.*DB|fake.*DB|pg_advisory_xact_lock|test.*lock" balances_test.go transactions_test.go

# Look for error handling in tests
rg "Error|err.*nil" balances_test.go transactions_test.go

Length of output: 4409

internal/storage/ledger/accounts_test.go (4)

153-153: LGTM!

The added empty line improves readability by separating test cases.


224-224: LGTM!

The added empty line improves readability by separating test cases.


322-325: LGTM! Good improvement in test readability.

Storing the timestamp in a dedicated tx2Timestamp variable makes the test more explicit and maintainable, and makes the historical nature of the transaction clearer.


Line range hint 466-539: LGTM! Comprehensive test coverage for account timestamp handling.

The test cases cover the three relevant upsert paths (initial insert with the current timestamp, upsert with a future timestamp that must not modify the record, and upsert with a past timestamp that must) and verify the resulting account state from the database:

  • FirstUsage is set to the earliest timestamp seen
  • InsertionDate remains unchanged after the initial creation
  • UpdatedAt reflects the latest modification
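Condensed, the behaviour those cases pin down looks like the sketch below. newLedgerStore, UpsertAccounts, and the read-back helper are assumed to match the helpers already used in this file (getAccountFromDB is a hypothetical name); field names follow the ledger.Account struct discussed above.

func TestUpsertAccountTimestampSemantics(t *testing.T) {
	t.Parallel()

	store := newLedgerStore(t)
	ctx := context.Background()

	now := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
	earlier := now.Add(-time.Hour)

	require.NoError(t, store.UpsertAccounts(ctx, &ledger.Account{
		Address: "foo", FirstUsage: now, InsertionDate: now, UpdatedAt: now,
	}))

	// Re-upserting with an earlier first usage must move FirstUsage back
	// while the original InsertionDate is preserved.
	require.NoError(t, store.UpsertAccounts(ctx, &ledger.Account{
		Address: "foo", FirstUsage: earlier, InsertionDate: earlier, UpdatedAt: earlier,
	}))

	account := getAccountFromDB(t, store, "foo") // assumed read-back helper
	require.Equal(t, earlier, account.FirstUsage)
	require.Equal(t, now, account.InsertionDate)
}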
internal/controller/ledger/controller_default.go (3)

294-294: LGTM!

Minor formatting improvement.


454-458: LGTM! Good defensive programming.

The nil check and initialization of the metadata map prevent potential nil pointer dereferences, following Go best practices.


459-465: LGTM! Consistent timestamp handling.

The code captures the current time once and uses it consistently across all timestamp fields (InsertionDate, UpdatedAt, FirstUsage), ensuring temporal consistency in the account metadata.

Comment on lines +454 to +465
+	metadata := parameters.Input.Metadata
+	if metadata == nil {
+		metadata = make(map[string]string)
+	}
+
+	now := time.Now()
 	if err := store.UpsertAccounts(ctx, &ledger.Account{
-		Address:  parameters.Input.Address,
-		Metadata: parameters.Input.Metadata,
+		Address:       parameters.Input.Address,
+		Metadata:      metadata,
+		InsertionDate: now,
+		UpdatedAt:     now,
+		FirstUsage:    now,

🛠️ Refactor suggestion

Consider retrieving FirstUsage from transaction history.

Setting FirstUsage to the current time during metadata save might not accurately reflect when the account was first used in a transaction. Consider retrieving the actual first usage timestamp from the transaction history.

Apply this diff to improve FirstUsage handling:

 metadata := parameters.Input.Metadata
 if metadata == nil {
   metadata = make(map[string]string)
 }

 now := time.Now()
+// Get the earliest transaction timestamp for this account
+firstUsage, err := store.Transactions().GetOne(ctx, ledgercontroller.ResourceQuery[any]{
+  Builder: query.Match("postings.source", parameters.Input.Address).
+           Or(query.Match("postings.destination", parameters.Input.Address)),
+  Order:   pointer.For(bunpaginate.Order(bunpaginate.OrderAsc)),
+})
+if err != nil && !errors.Is(err, postgres.ErrNotFound) {
+  return nil, fmt.Errorf("failed to get first usage: %w", err)
+}
+
+firstUsageTime := now
+if firstUsage != nil {
+  firstUsageTime = firstUsage.Timestamp
+}
+
 if err := store.UpsertAccounts(ctx, &ledger.Account{
   Address:       parameters.Input.Address,
   Metadata:      metadata,
   InsertionDate: now,
   UpdatedAt:     now,
-  FirstUsage:    now,
+  FirstUsage:    firstUsageTime,
 }); err != nil {

Committable suggestion skipped: line range outside the PR's diff.
