
starknet: add event decoder module #133

Merged — 6 commits merged into apibara:main on Jan 17, 2025

Conversation

@fracek (Contributor) commented Jan 17, 2025

This PR adds helpers to decode Starknet events with an easy-to-use function.
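To make the call shape concrete, usage would look roughly like the following. This is a hedged, self-contained sketch: the real decodeEvent in this PR is ABI-driven (see packages/starknet/src/event.ts in the change list), while the toy stand-in here hardcodes a Transfer-style layout so the snippet runs on its own.

```typescript
// Hypothetical, self-contained sketch of the call shape this PR enables.
// `decodeEvent` below is a toy stand-in for illustration only — the real
// implementation is ABI-driven and lives in packages/starknet/src/event.ts.
type FieldElement = string;
type StarknetEvent = {
  keys?: readonly FieldElement[];
  data?: readonly FieldElement[];
};

// Toy decoder: reads a Transfer-style payload (from, to, two-limb amount).
function decodeEvent(args: { event: StarknetEvent; eventName: string }) {
  const data = args.event.data ?? [];
  if (data.length < 4) throw new Error("malformed event data");
  const low = BigInt(data[2]);
  const high = BigInt(data[3]);
  return {
    eventName: args.eventName,
    args: {
      from: data[0],
      to: data[1],
      amount: low + (high << 128n), // u256 composed from low/high limbs
    },
  };
}

const decoded = decodeEvent({
  event: { data: ["0x1", "0x2", "0x64", "0x0"] },
  eventName: "Transfer",
});
// decoded.args.amount is 100n (0x64)
```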

coderabbitai bot commented Jan 17, 2025

📝 Walkthrough

This pull request extends the Starknet package with event decoding and ABI parsing. The changes add new dependencies, implement event and data parsing mechanisms, and introduce test suites for the new code. Key pieces are flexible parsing utilities for different data types, selector generation functions, and improved handling of complex blockchain data structures.
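The flexible parsing utilities follow a parser-combinator pattern: each parser consumes felts starting at an offset and returns the decoded value together with the new offset, so parsers compose. A minimal sketch of that pattern (the names mirror the shape of the PR's parser.ts but the code here is illustrative, not the package's exports):

```typescript
// Illustrative sketch of the offset-threading parser pattern.
type FieldElement = string;
type Parser<T> = (
  data: readonly FieldElement[],
  offset: number,
) => { out: T; offset: number };

// Simplest parser: one felt consumed, returned as a bigint.
const parseFelt: Parser<bigint> = (data, offset) => ({
  out: BigInt(data[offset]),
  offset: offset + 1,
});

// Arrays are length-prefixed: the first felt holds the element count.
function parseArray<T>(element: Parser<T>): Parser<T[]> {
  return (data, start) => {
    let offset = start;
    const length = Number(BigInt(data[offset++]));
    const out: T[] = [];
    for (let i = 0; i < length; i++) {
      const { out: item, offset: next } = element(data, offset);
      out.push(item);
      offset = next;
    }
    return { out, offset };
  };
}

const { out, offset } = parseArray(parseFelt)(["0x2", "0xa", "0xb"], 0);
// out is [10n, 11n]; offset is 3
```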

Changes

| File | Change Summary |
|------|----------------|
| examples/starknet-client/package.json | Added viem dependency |
| examples/starknet-client/src/main.ts | Updated Starknet stream URL; added event decoding and formatting logic |
| packages/starknet/build.config.ts | Added ./src/parser.ts as a new entry point |
| packages/starknet/package.json | Added new export for ./parser module; added @scure/starknet and abi-wan-kanabi dependencies |
| packages/starknet/src/abi.ts | Added multiple selector and type parsing functions |
| packages/starknet/src/access.ts | Updated getReceipt and getTransaction function signatures |
| packages/starknet/src/common.ts | Added FieldElement type export |
| packages/starknet/src/event.ts | Implemented decodeEvent functionality with comprehensive error handling |
| packages/starknet/src/index.ts | Added new exports and module declarations |
| packages/starknet/src/parser.ts | Introduced comprehensive parsing utilities for various data types |
| packages/starknet/tests/block.bench.ts | Added benchmarks for BlockFromBytes functionality |
| packages/starknet/tests/block.test.ts | Introduced test suite for BlockFromBytes functionality |
| packages/starknet/tests/event.test.ts | Added test suite for decodeEvent function |
| packages/starknet/tests/fixtures.ts | Added functionality to read hexadecimal data from a file |
| packages/starknet/tests/fixtures/chainlink-abi.ts | Defined ABI for Chainlink's OCR2 aggregator |
| packages/starknet/tests/fixtures/ekubo-abi.ts | Defined ABI for the Ekubo protocol |
| change/@apibara-starknet-bcb46cf2-9d8c-46cb-b404-7c805eb96097.json | Added prerelease version information for @apibara/starknet |

Sequence Diagram

sequenceDiagram
    participant Client
    participant EventDecoder
    participant ABIParser
    participant BlockData

    Client->>EventDecoder: decodeEvent(eventArgs)
    EventDecoder->>ABIParser: Validate event in ABI
    ABIParser-->>EventDecoder: Event validation result
    EventDecoder->>ABIParser: Compile event parsers
    ABIParser-->>EventDecoder: Parsed event structure
    EventDecoder->>BlockData: Extract event data
    BlockData-->>EventDecoder: Raw event data
    EventDecoder->>EventDecoder: Decode event members
    EventDecoder-->>Client: Decoded event object
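The flow in the diagram reads as three steps: validate the event exists in the ABI, compile parsers for its members, then run them over the raw data. A toy, self-contained walkthrough of those steps (the real implementation is in packages/starknet/src/event.ts; the names and all-felt simplification here are illustrative):

```typescript
// Toy walkthrough of the sequence above: ABI lookup → parser compilation →
// data extraction. All members are felts here; not the package's real code.
type Felt = string;
type EventDef = { name: string; members: { name: string }[] };

function decodeFlow(abi: EventDef[], eventName: string, data: readonly Felt[]) {
  // Step 1: validate the event is present in the ABI.
  const def = abi.find((e) => e.name === eventName);
  if (!def) throw new Error(`event ${eventName} not found in ABI`);
  // Step 2: "compile" one parser per member (every member is a felt here).
  const parsers = def.members.map((m) => ({
    key: m.name,
    parse: (felt: Felt) => BigInt(felt),
  }));
  // Step 3: run the parsers over the raw event data, in member order.
  const out: Record<string, bigint> = {};
  parsers.forEach((p, i) => {
    out[p.key] = p.parse(data[i]);
  });
  return out;
}

const abi = [{ name: "Transfer", members: [{ name: "from" }, { name: "to" }] }];
const decoded = decodeFlow(abi, "Transfer", ["0x1", "0x2"]);
// decoded is { from: 1n, to: 2n }
```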

Possibly related PRs

  • plugin-drizzle: add drizzle plugin and persistence #130: introduces the drizzleStorage functionality, related to the new "viem": "^2.22.9" dependency added in this PR; both support data-persistence mechanisms.
  • example: update cli example #131: updates the CLI example to use @apibara/plugin-drizzle, demonstrating the new storage capabilities in practice.

Poem

🐰 Parsing blocks with rabbit might,
Selectors dancing in the light,
Events decoded, types so neat,
StarkNet's magic now complete!
Code hops forward with delight! 🚀


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 06ab94e and d8ffc53.

📒 Files selected for processing (1)
  • change/@apibara-starknet-bcb46cf2-9d8c-46cb-b404-7c805eb96097.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • change/@apibara-starknet-bcb46cf2-9d8c-46cb-b404-7c805eb96097.json
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test


@coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Nitpick comments (15)
packages/starknet/tests/fixtures.ts (1)

13-15: Consider adding input validation for hex string.

The fromHex function should validate the input string format and handle edge cases.

 function fromHex(hex: string): Uint8Array {
+  if (hex.length % 2 !== 0) {
+    throw new Error("Hex string must have even length");
+  }
   return Uint8Array.from(Buffer.from(hex, "hex"));
 }
packages/starknet/tests/block.bench.ts (2)

9-9: Add error handling to decode benchmark.

The benchmark should catch and report potential decoding errors. Also, consider measuring memory usage for the empty block case.

-const decode = Schema.decodeSync(BlockFromBytes);
+const decode = (input: Uint8Array) => {
+  try {
+    return Schema.decodeSync(BlockFromBytes)(input);
+  } catch (error) {
+    console.error("Decoding failed:", error);
+    throw error;
+  }
+};

 describe("BlockFromBytes - empty block", () => {
   bench("decode", () => {
     decode(emptyBlock);
   });
+  bench.skip("memory usage", () => {
+    const beforeMemory = process.memoryUsage().heapUsed;
+    decode(emptyBlock);
+    const afterMemory = process.memoryUsage().heapUsed;
+    console.log(`Memory used: ${(afterMemory - beforeMemory) / 1024} KB`);
+  });
 });

Also applies to: 12-15


17-25: Consider adding performance thresholds.

The large block benchmarks should include explicit iteration counts and time budgets to catch performance regressions. Note that Vitest's bench takes its options object as the third argument, after the benchmark function.

 describe("BlockFromBytes - large block", () => {
   bench("decode", () => {
     decode(largeBlock);
-  });
+  }, { warmupIterations: 3, iterations: 10, time: 5_000 });

   bench("protobuf", () => {
     proto.data.Block.decode(largeBlock);
-  });
+  }, { warmupIterations: 3, iterations: 10, time: 5_000 });
 });
packages/starknet/src/event.ts (2)

89-91: Enums Decoding Not Implemented

The current implementation throws an error when encountering events with enum types. To enhance functionality and accommodate more complex ABIs, consider implementing support for decoding events that include enum types.


126-132: Simplify Error Handling in Catch Block

In the catch block, both conditions for DecodeEventError and ParseError return the same value when strict is false. You can simplify the logic by combining these conditions.

Apply this diff to simplify the catch block:

      } catch (error) {
-       if (error instanceof DecodeEventError && !strict) {
-         return null as DecodeEventReturn<TAbi, TEventName, TStrict>;
-       }
-       if (error instanceof ParseError && !strict) {
-         return null as DecodeEventReturn<TAbi, TEventName, TStrict>;
-       }
+       if (!strict && (error instanceof DecodeEventError || error instanceof ParseError)) {
+         return null as DecodeEventReturn<TAbi, TEventName, TStrict>;
+       }
        throw error;
      }
packages/starknet/src/index.ts (1)

16-22: Document Module Augmentation for abi-wan-kanabi

The module augmentation adds a Config interface with custom types to abi-wan-kanabi. To improve maintainability and clarity for other developers, consider adding comments or documentation explaining the purpose of this augmentation and how it integrates with abi-wan-kanabi.

packages/starknet/src/abi.ts (1)

37-47: Consider adding type safety for parser mapping.

The PrimitiveTypeParsers mapping looks good, but could benefit from stronger typing.

Consider adding a type definition:

type Parser<T> = (value: unknown) => T;

export const PrimitiveTypeParsers: Record<string, Parser<unknown>> = {
  "core::bool": parseBool,
  // ... rest of the mappings
};
examples/starknet-client/src/main.ts (2)

15-38: Consider extracting ABI to a separate file.

The Transfer event ABI definition could be moved to a dedicated file for better maintainability and reusability.

Consider creating a new file transfer-abi.ts:

import type { Abi } from "@apibara/starknet";

export const transferAbi = [
  {
    kind: "struct",
    name: "Transfer",
    // ... rest of the definition
  },
] as const satisfies Abi;

127-129: Consider enhancing address formatting.

The address formatting utility could be more flexible.

Consider adding configurable prefix/suffix lengths:

function prettyAddress(address: string, prefixLength = 6, suffixLength = 4) {
  return `${address.slice(0, prefixLength)}...${address.slice(-suffixLength)}`;
}
packages/starknet/src/parser.ts (2)

52-55: Consider adding input validation for boolean values.

The parseBool function assumes any non-zero value is true. Consider adding explicit validation for boolean values (0 and 1).

 export function parseBool(data: readonly FieldElement[], offset: number) {
   assertInBounds(data, offset);
-  return { out: BigInt(data[offset]) > 0n, offset: offset + 1 };
+  const value = BigInt(data[offset]);
+  if (value !== 0n && value !== 1n) {
+    throw new ParseError(`Invalid boolean value: ${value}. Expected 0 or 1.`);
+  }
+  return { out: value === 1n, offset: offset + 1 };
 }

135-151: Consider optimizing parseStruct for large structs.

The parseStruct function sorts parsers on every call. Consider memoizing the sorted parsers for better performance.

 export function parseStruct<T extends { [key: string]: unknown }>(
   parsers: { [K in keyof T]: { index: number; parser: Parser<T[K]> } },
 ) {
-  const sortedParsers = Object.entries(parsers).sort(
+  // Memoize sorted parsers
+  const sortedParsers = Object.freeze(Object.entries(parsers).sort(
     (a, b) => a[1].index - b[1].index,
-  );
+  ));
   return (data: readonly FieldElement[], startingOffset: number) => {
     let offset = startingOffset;
     const out: Record<string, unknown> = {};
     for (const [key, { parser }] of sortedParsers) {
       const { out: value, offset: newOffset } = parser(data, offset);
       out[key] = value;
       offset = newOffset;
     }
     return { out, offset };
   };
 }
packages/starknet/src/parser.test.ts (2)

20-83: Add edge case tests for primitive types.

The primitive types test suite should include tests for:

  • Maximum values for each integer type
  • Error cases for out-of-bounds values
  • Invalid hex strings
it("throws error for u8 overflow", () => {
  const data = ["0x100"] as const; // 256 > max u8 (255)
  expect(() => parseU8(data, 0)).toThrow(ParseError);
});

it("throws error for invalid hex", () => {
  const data = ["0xInvalid"] as const;
  expect(() => parseU8(data, 0)).toThrow();
});

85-97: Add more array parser test cases.

The array parser test suite should include:

  • Empty array case
  • Maximum length array case
  • Nested arrays
it("can parse an empty array", () => {
  const data = ["0x0"] as const;
  const { out, offset } = parseArray(parseU8)(data, 0);
  expect(out).toEqual([]);
  expect(offset).toBe(1);
});

it("can parse nested arrays", () => {
  const data = ["0x2", "0x2", "0x1", "0x2", "0x2", "0x3", "0x4"] as const;
  const { out } = parseArray(parseArray(parseU8))(data, 0);
  expect(out).toEqual([[1n, 2n], [3n, 4n]]);
});
packages/starknet/tests/event.test.ts (1)

10-93: Add more error case tests for non-strict mode.

The non-strict mode test suite should include additional error cases:

  • Invalid event name
  • Missing event in ABI
  • Malformed event data
it("returns null for non-existent event name", () => {
  const decoded = decodeEvent({
    abi: ekuboAbi,
    event: { /* ... */ } as const satisfies Event,
    eventName: "NonExistentEvent",
    strict: false,
  });
  expect(decoded).toBeNull();
});

it("returns null for malformed event data", () => {
  const decoded = decodeEvent({
    abi: ekuboAbi,
    event: { data: [] } as const satisfies Event,
    eventName: "ekubo::core::Core::PositionUpdated",
    strict: false,
  });
  expect(decoded).toBeNull();
});
packages/starknet/tests/fixtures/chainlink-abi.ts (1)

3-588: Add JSDoc comments for better documentation.

Consider adding JSDoc comments to describe the purpose and usage of key ABI components:

  • Structs (Round, OracleConfig, etc.)
  • Events (NewTransmission, ConfigSet, etc.)
  • Functions (transmit, latest_transmission_details)

Example:

/**
 * Chainlink OCR2 Aggregator ABI
 * @description Interface for interacting with Chainlink's OCR2 price feeds on StarkNet
 */
export const chainlinkAbi = [
  /**
   * @struct Round
   * @description Represents a single price update round
   * @property {felt252} round_id - Unique identifier for the round
   * @property {u128} answer - The price value reported in this round
   * @property {u64} block_num - Block number when the round was created
   * @property {u64} started_at - Timestamp when the round started
   * @property {u64} updated_at - Timestamp of the last update
   */
  {
    name: "chainlink::ocr2::aggregator::Round",
    // ...
  },
  // ... rest of the ABI
] as const satisfies Abi;
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ea71a06 and 06ab94e.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (17)
  • examples/starknet-client/package.json (1 hunks)
  • examples/starknet-client/src/main.ts (5 hunks)
  • packages/starknet/build.config.ts (1 hunks)
  • packages/starknet/package.json (2 hunks)
  • packages/starknet/src/abi.ts (1 hunks)
  • packages/starknet/src/access.ts (2 hunks)
  • packages/starknet/src/common.ts (1 hunks)
  • packages/starknet/src/event.ts (1 hunks)
  • packages/starknet/src/index.ts (2 hunks)
  • packages/starknet/src/parser.test.ts (1 hunks)
  • packages/starknet/src/parser.ts (1 hunks)
  • packages/starknet/tests/block.bench.ts (1 hunks)
  • packages/starknet/tests/block.test.ts (1 hunks)
  • packages/starknet/tests/event.test.ts (1 hunks)
  • packages/starknet/tests/fixtures.ts (1 hunks)
  • packages/starknet/tests/fixtures/chainlink-abi.ts (1 hunks)
  • packages/starknet/tests/fixtures/ekubo-abi.ts (1 hunks)
🧰 Additional context used
🪛 Gitleaks (8.21.2)
packages/starknet/tests/event.test.ts

169-169: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


170-170: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test
🔇 Additional comments (12)
packages/starknet/build.config.ts (1)

4-4: Verify Necessity of Including ./src/parser.ts in Build Entries

Adding ./src/parser.ts to the entries array will compile it as a separate bundle. If the parser functionalities are intended for internal use only, consider exporting the necessary components through ./src/index.ts instead. This approach keeps the bundle size optimized and maintains a cleaner public API.

packages/starknet/src/common.ts (1)

37-37: LGTM! Type export enhances type safety.

The FieldElement type export correctly leverages the existing Schema definition, providing proper type safety when used across modules.

packages/starknet/src/access.ts (2)

6-8: LGTM! Enhanced parameter flexibility.

The updated parameter type allows for both structured ({ receipts: ... }) and unstructured (direct array) inputs, improving API flexibility while maintaining backward compatibility.


21-21: LGTM! Consistent parameter type enhancement.

The transaction parameter type follows the same pattern as receipts, maintaining consistency across the API.

packages/starknet/src/abi.ts (2)

16-35: LGTM! Well-implemented selector generation functions.

The selector functions are well-documented and correctly implement the Starknet selector generation logic:

  • getBigIntSelector: Uses keccak for hashing
  • getSelector: Properly formats the output as a hex string
  • getEventSelector: Correctly handles fully qualified names

49-79: LGTM! Comprehensive type checking utilities.

The type checking and extraction functions are well-implemented:

  • Clear and focused purpose for each function
  • Consistent pattern for type extraction
  • Good coverage of core Starknet types
examples/starknet-client/src/main.ts (1)

95-103: LGTM! Clean event processing implementation.

The event processing logic effectively uses the new decoder module and formats output clearly.

packages/starknet/src/parser.ts (1)

1-27: LGTM! Proper license attribution.

The code properly attributes the original work to Ekubo and includes the MIT license.

packages/starknet/tests/fixtures/ekubo-abi.ts (1)

3-654: Well-structured and comprehensive ABI definition!

The ABI is well-organized with clear type definitions for all components, following proper Cairo/Starknet naming conventions. The use of as const satisfies Abi ensures type safety while maintaining the constant nature of the ABI.

examples/starknet-client/package.json (1)

17-18: LGTM: Appropriate dependency addition.

The addition of viem with version ^2.22.9 is appropriate for enhanced blockchain data handling capabilities.

packages/starknet/package.json (2)

19-23: LGTM: Well-structured parser module export configuration.

The parser module export follows the package's established pattern with proper type definitions and module resolution paths.


47-48: LGTM: Appropriate dependencies for Starknet development.

The addition of @scure/starknet and abi-wan-kanabi with appropriate version ranges will enhance the package's Starknet capabilities.

Comment on lines +7 to +11
function readHexFile(f: string): Uint8Array {
const filename = path.join(import.meta.dirname, f);
const hex = fs.readFileSync(filename, "utf-8");
return fromHex(hex);
}

🛠️ Refactor suggestion

Add error handling for file operations.

The readHexFile function should handle potential file system errors and invalid file content gracefully.

 function readHexFile(f: string): Uint8Array {
   const filename = path.join(import.meta.dirname, f);
-  const hex = fs.readFileSync(filename, "utf-8");
-  return fromHex(hex);
+  try {
+    const hex = fs.readFileSync(filename, "utf-8").trim();
+    if (!hex.match(/^[0-9a-fA-F]*$/)) {
+      throw new Error("Invalid hex content");
+    }
+    return fromHex(hex);
+  } catch (error) {
+    if (error instanceof Error) {
+      throw new Error(`Failed to read hex file: ${error.message}`);
+    }
+    throw error;
+  }
 }

Comment on lines +10 to +21
describe("BlockFromBytes", () => {
it("decode", () => {
const block = decode(largeBlock);
expect(block?.header).toBeDefined();
expect(block?.events).toHaveLength(919);
expect(block?.transactions).toHaveLength(50);
expect(block?.receipts).toHaveLength(50);
expect(block?.storageDiffs).toHaveLength(21);
expect(block?.contractChanges).toHaveLength(1);
expect(block?.nonceUpdates).toHaveLength(37);
});
});

🛠️ Refactor suggestion

Enhance test coverage with additional test cases.

The current test suite only verifies array lengths. Consider adding:

  1. Error cases for invalid input
  2. Validation of actual block content
  3. Edge cases for different block structures
 describe("BlockFromBytes", () => {
+  it("should handle empty block", () => {
+    const block = decode(emptyBlock);
+    expect(block?.header).toBeDefined();
+    expect(block?.events).toHaveLength(0);
+    expect(block?.transactions).toHaveLength(0);
+  });
+
+  it("should throw on invalid input", () => {
+    expect(() => decode(new Uint8Array([1, 2, 3])))
+      .toThrow();
+  });
+
   it("decode", () => {
     const block = decode(largeBlock);
     expect(block?.header).toBeDefined();
+    // Verify specific header fields
+    expect(block?.header.blockNumber).toBeDefined();
+    expect(block?.header.blockHash).toBeDefined();
+
     expect(block?.events).toHaveLength(919);
+    // Verify first event structure
+    expect(block?.events[0]).toMatchObject({
+      from: expect.any(String),
+      keys: expect.any(Array),
+      data: expect.any(Array),
+    });
+
     expect(block?.transactions).toHaveLength(50);
     expect(block?.receipts).toHaveLength(50);
     expect(block?.storageDiffs).toHaveLength(21);
     expect(block?.contractChanges).toHaveLength(1);
     expect(block?.nonceUpdates).toHaveLength(37);
   });
 });

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +65 to +71
events: [
{
address:
"0x053c91253bc9682c04929ca02ed00b3e423f6710d2ee7e0d5ebb06f3ecf368a8",
keys: [getSelector("Transfer")],
},
],

🛠️ Refactor suggestion

Consider making contract address configurable.

The hardcoded contract address in the filter configuration limits reusability.

Consider making it a command argument:

-      address:
-        "0x053c91253bc9682c04929ca02ed00b3e423f6710d2ee7e0d5ebb06f3ecf368a8",
+      address: args.contractAddress,

And add to command args:

contractAddress: {
  type: "string",
  required: true,
  description: "Contract address to monitor for Transfer events",
},

Comment on lines +69 to +75
export function parseU256(data: readonly FieldElement[], offset: number) {
assertInBounds(data, offset + 1);
return {
out: BigInt(data[offset]) + (BigInt(data[offset + 1]) << 128n),
offset: offset + 2,
};
}

🛠️ Refactor suggestion

Add bounds checking for U256 values.

The parseU256 function should validate that the parsed value doesn't exceed the maximum U256 value.

 export function parseU256(data: readonly FieldElement[], offset: number) {
   assertInBounds(data, offset + 1);
+  const low = BigInt(data[offset]);
+  const high = BigInt(data[offset + 1]);
+  const value = low + (high << 128n);
+  const MAX_U256 = (1n << 256n) - 1n;
+  if (value > MAX_U256) {
+    throw new ParseError(`Value exceeds maximum U256: ${value}`);
+  }
   return {
-    out: BigInt(data[offset]) + (BigInt(data[offset + 1]) << 128n),
+    out: value,
     offset: offset + 2,
   };
 }
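As a sanity check on the two-limb arithmetic being guarded here: a u256 is composed as low + high·2^128, so with both limbs constrained to valid u128 values the sum cannot exceed MAX_U256 — the suggested check only fires when a limb itself is out of range. A standalone illustration (helper names are ours, not the package's):

```typescript
// u256 composition from two 128-bit limbs, as performed by parseU256.
const composeU256 = (low: bigint, high: bigint): bigint => low + (high << 128n);

const MAX_U128 = (1n << 128n) - 1n;

// Saturating both limbs yields exactly the u256 maximum:
const MAX_U256 = composeU256(MAX_U128, MAX_U128);
// MAX_U256 === (1n << 256n) - 1n
```

This suggests validating each limb against MAX_U128 may be the tighter guard, since the combined check can only trigger when a single limb exceeds 128 bits.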

Comment on lines +105 to +121
export function parseArray<T>(type: Parser<T>): Parser<T[]> {
return (data: readonly FieldElement[], startingOffset: number) => {
let offset = startingOffset;
const length = BigInt(data[offset]);

offset++;

const out: T[] = [];
for (let i = 0; i < length; i++) {
const { out: item, offset: newOffset } = type(data, offset);
out.push(item);
offset = newOffset;
}

return { out, offset };
};
}

🛠️ Refactor suggestion

Add length validation in parseArray.

The parseArray function should validate the array length to prevent potential out-of-memory issues.

 export function parseArray<T>(type: Parser<T>): Parser<T[]> {
   return (data: readonly FieldElement[], startingOffset: number) => {
     let offset = startingOffset;
     const length = BigInt(data[offset]);
+    const MAX_ARRAY_LENGTH = 1024n; // Adjust based on your requirements
+    if (length > MAX_ARRAY_LENGTH) {
+      throw new ParseError(`Array length ${length} exceeds maximum allowed length ${MAX_ARRAY_LENGTH}`);
+    }
     offset++;
     const out: T[] = [];
     for (let i = 0; i < length; i++) {
       const { out: item, offset: newOffset } = type(data, offset);
       out.push(item);
       offset = newOffset;
     }
     return { out, offset };
   };
 }

@fracek fracek merged commit c5a1ee9 into apibara:main Jan 17, 2025
2 checks passed