remove view_state limit #145

Merged: 1 commit merged into main on Dec 13, 2023

Conversation

kobayurii (Member) commented Dec 13, 2023

Changes Implemented

1. Removed key limit: the previously imposed limit on the number of keys in a state has been removed, so the system can now handle a larger number of keys without restriction.
2. Optimized queries: key and value queries are now more efficient; in particular, value lookups are issued concurrently in batches rather than sequentially for the whole key set (see the sketch after this list).
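
A minimal sketch of the batched, concurrent value lookups described above. The `fetch_value` helper is a hypothetical stand-in for the real database accessor (`db_manager.get_state_key_value` in the diff below), and the batch size of 1000 mirrors the hardcoded value in this PR:

```rust
use futures::future;

// Hypothetical stand-in for the real database/RPC lookup.
async fn fetch_value(key: &str) -> String {
    format!("value-for-{key}")
}

async fn fetch_all(state_keys: &[String]) -> Vec<(String, String)> {
    let mut result = Vec::with_capacity(state_keys.len());
    // Issue lookups concurrently, but no more than 1000 at a time.
    for chunk in state_keys.chunks(1000) {
        let futures: Vec<_> = chunk.iter().map(|key| fetch_value(key)).collect();
        let values = future::join_all(futures).await;
        result.extend(chunk.iter().cloned().zip(values));
    }
    result
}
```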

Temporary Solution
It's important to note that while these changes are a substantial improvement, this solution is temporary. The system now handles a greater number of keys, but there is still no guarantee for exceptionally large states, such as those in the gigabyte range.

Future Steps

1. Pagination implementation: the next step will introduce new endpoints designed specifically for pagination, allowing large states to be retrieved in manageable chunks or pages (a hypothetical request/response shape is sketched after this list).
2. Guaranteed retrieval for large states: these endpoints will guarantee retrieval of all keys, even for states with massive data volumes, so clients can reliably access and process extensive datasets without constraints.
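
Purely as an illustration of the pagination idea, not an API that exists yet, a paginated request/response might look like the following sketch; every name here is an assumption, and serialization via serde is likewise assumed:

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical parameters for a future paginated view-state endpoint.
#[derive(Debug, Deserialize)]
struct ViewStatePageRequest {
    account_id: String,
    block_height: u64,
    /// Opaque cursor returned by the previous page; `None` requests the first page.
    cursor: Option<String>,
    /// Maximum number of key-value pairs to return in one page.
    limit: usize,
}

/// Hypothetical response carrying one page of state plus a continuation cursor.
#[derive(Debug, Serialize)]
struct ViewStatePage {
    values: Vec<(String, String)>,
    /// `Some` while more keys remain, `None` once the state has been fully returned.
    next_cursor: Option<String>,
}
```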

This pull request lays the groundwork for future improvements, acknowledging the current enhancements while also outlining a clear path for addressing the handling of exceptionally large datasets in the upcoming iterations of the system.

@kobayurii kobayurii requested review from frol and khorolets December 13, 2023 13:58
@kobayurii kobayurii self-assigned this Dec 13, 2023
@kobayurii kobayurii marked this pull request as ready for review December 13, 2023 13:58
kobayurii (Member, Author) commented:
#144

@khorolets (Member) left a comment:


LGTM!

@kobayurii kobayurii merged commit 0154109 into main Dec 13, 2023
3 checks passed
Comment on lines +51 to +60
for state_keys_chunk in state_keys.chunks(1000) {
    // TODO: 1000 is hardcoded value. Make it configurable.
    let mut tasks_futures = vec![];
    for state_key in state_keys_chunk {
        let state_value_result_future =
            db_manager.get_state_key_value(account_id, block_height, state_key.clone());
        tasks_futures.push(state_value_result_future);
    }
    let results = futures::future::join_all(tasks_futures).await;
    for (state_key, state_value) in results.into_iter() {

Collaborator:

FYI, this code will suffer from waiting for the longest request out of 1000 before it can move on to the next batch of 1000. You can do better by turning it into a stream of concurrent requests (FuturesUnordered can be the solution, but you don't really need to use it explicitly):

https://github.com/near/near-cli-rs/blob/7aca0acfe772fa7bc2c06c470e68ae8d05edd94f/src/commands/account/view_account_summary/mod.rs#L62-L82 (you don't need block_on here, obviously; use .await instead)
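
A minimal sketch of the suggested stream-based approach, assuming a hypothetical async `fetch_value` stand-in for `db_manager.get_state_key_value`. `buffer_unordered` from the `futures` crate keeps up to 1000 lookups in flight at once, so one slow request no longer stalls an entire batch:

```rust
use futures::{stream, StreamExt};

// Hypothetical stand-in for the real database/RPC lookup.
async fn fetch_value(key: String) -> (String, String) {
    let value = format!("value-for-{key}");
    (key, value)
}

async fn fetch_all(state_keys: Vec<String>) -> Vec<(String, String)> {
    stream::iter(state_keys)
        .map(fetch_value)
        // Up to 1000 lookups run concurrently; results are consumed as each
        // one completes instead of waiting for a whole chunk to finish.
        .buffer_unordered(1000)
        .collect()
        .await
}
```

Compared with `join_all` over fixed chunks, this keeps the concurrency level constant rather than draining each batch before starting the next.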

@kobayurii kobayurii deleted the state_limit branch December 20, 2023 13:58
Labels: none · Projects: none · 3 participants