Index request transaction for multichain #346
Comments
If you pick Option 1 or 2, it might make sense to have Option 3 as a fallback, useful for covering the time until the node is synced or for quicker local or integration testing.
It seems that Options 1 and 2 are the same in terms of the tech behind them, so you really have only two options in the end :( You can explore the NEAR Indexer Framework further and make it store data and push it to you simultaneously (in parallel), but either way you'll have to handle gigabytes of nearcore data and maintain those nodes, e.g. upgrading them with new nearcore releases.
Yeah, same tech, just a different way of using Indexer Framework (i.e. embedded inside of our service vs a standalone microservice). I made the separation because my initial hunch was to just depend on the
This would be a pretty serious commitment, yep, but I don't see any other way to build a trustworthy decentralized MPC service. If we depend on a centralized indexer (such as Lake Indexer), then an attacker can get access to everyone's funds just by posting a fake block into the S3 bucket. Option 3 would be okay for prototyping or as a (decentralized) fallback, but not really okay as a primary source of transactions in the long run.
I would say that for the requirements provided here, NEAR Lake Framework is not a good option. There is a big difference between 1) and 2): it comes down to whether the MPC service can provide any other value to users at all when it isn't receiving information from the indexer. If that's the case, and the MPC server should stay up even if the indexer is down, it is better to separate the indexer into its own microservice. If the MPC server has to go down or become unresponsive when the indexing service is down for consistency reasons, the indexer framework can be included in it. You will also get the additional benefit of being able to communicate with the NEAR blockchain through the Rust SDK instead of making RPC calls if you need to. Nearcore upgrades will become a major maintenance task, and you will need a robust upgrading process similar to how our RPC nodes are upgraded.
I am leaning toward Option 1, since the indexer is a hard dependency here: a failing or halted indexer is equivalent to halting the MPC service. For example, if the indexer fails, the MPC node will be blind and won't see any new requests come in at all, effectively stopping the node from doing any work on the network. But note that having a single node down isn't too much of an issue either, since this is a threshold system where the rest of the network can keep running. We won't lose any state either, as most of it, if not all, is stored in the contract itself, so halting isn't a huge issue on that front. The only thing I see with halting the MPC node is that all the triples it stockpiles will potentially be lost, and regenerating them is quite expensive. But we can keep them in persistent storage somewhere to counter that.
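As a very rough illustration of that last point, the stockpiled triples could be flushed to disk and reloaded on restart so a halted node does not have to redo the expensive generation. This is only a sketch under assumed types: `Triple`, `PersistedTriples`, and the storage path are hypothetical placeholders, not anything that exists in `mpc-recovery` today.

```rust
use std::fs;
use std::io;
use std::path::Path;

use serde::{Deserialize, Serialize};

/// Hypothetical stand-in for a precomputed triple; the real type lives
/// inside the MPC protocol implementation.
#[derive(Serialize, Deserialize)]
struct Triple {
    id: u64,
    share: Vec<u8>,
}

#[derive(Serialize, Deserialize, Default)]
struct PersistedTriples {
    triples: Vec<Triple>,
}

impl PersistedTriples {
    /// Flush the current stockpile to disk so a halted node can resume
    /// without re-running triple generation.
    fn save(&self, path: &Path) -> io::Result<()> {
        let bytes = serde_json::to_vec(self)
            .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?;
        fs::write(path, bytes)
    }

    /// Reload the stockpile on startup; an empty stockpile is returned if
    /// nothing has been persisted yet.
    fn load(path: &Path) -> io::Result<Self> {
        match fs::read(path) {
            Ok(bytes) => serde_json::from_slice(&bytes)
                .map_err(|e| io::Error::new(io::ErrorKind::Other, e)),
            Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(Self::default()),
            Err(e) => Err(e),
        }
    }
}
```

In practice this would more likely be a proper key-value store (and the shares would need to be protected), but it shows that losing triples on a halt is avoidable.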
@itegulov @khorolets and I had a talk on this topic. It is clear that we will not use the NEAR Lake framework, since it would kill decentralization and add latency. With our shift to multichain and signing transactions on MPC, that is not acceptable. Both Options 1 and 2 are possible.
What was the decision here in the end?
Description
Assuming that developers will not be able to request MPC signatures via a system call, we will have to somehow make the MPC network aware of requests happening on-chain. One simple way to do this is to index all transactions happening on `multichain.near` (placeholder name) and look for `sign(payload)` calls specifically (see the flow in #326 for full context). There are two options for indexing in the NEAR ecosystem:
See the comparison table from the docs:
As you can see, there are certain pros and cons which I would like to analyze before we commit to a specific solution. I can see three ways we can go about this:
(Option 1) Tightly coupled NEAR Indexer Framework
We use the `near-indexer` crate directly in our code, making `mpc-recovery` a stateful service that embeds nearcore inside.
Pros:
Cons:
- `mpc-recovery` will have to store hundreds of GBs of on-chain data (assuming we only store data from the last ~24 hours) and will potentially require a few days to catch up from scratch
- Complicated infra management due to the statefulness of `mpc-recovery` (e.g. it can no longer run in Cloud Run)
(Option 2) Separate NEAR Indexer Framework microservice
We create a separate microservice similar to https://github.com/near-examples/indexer-tx-watcher-example that just indexes a specific contract and provides an API for new transactions there. Whether the API should be streaming, polling, or callback-based is up for debate. Every node
Pros:
- `mpc-recovery` can be run in Cloud Run, making complicated infra management concentrated in the new indexer service. Assuming nothing terrible happens to the service, we will never have to wait multiple days for it to catch up. But even if we do, the other working nodes can just kick us out while this particular node is not operational.
Cons:
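To make the streaming-vs-polling question above a bit more concrete, here is a minimal sketch of what the polling variant could look like from the MPC node's side. The `/sign-requests` endpoint, the `SignRequest` shape, and the `after_block` parameter are all hypothetical placeholders; none of this API is decided.

```rust
// Sketch dependencies: reqwest (with the `json` feature), tokio, serde.
use std::time::Duration;

use serde::Deserialize;

/// Hypothetical item returned by the indexer microservice for every
/// `sign(payload)` call it observed on the multichain contract.
#[derive(Debug, Deserialize)]
struct SignRequest {
    block_height: u64,
    predecessor_id: String,
    payload: Vec<u8>,
}

/// Ask the (hypothetical) indexer for requests newer than the last block
/// height this MPC node has already processed.
async fn poll_new_requests(
    client: &reqwest::Client,
    indexer_url: &str,
    after_block: u64,
) -> Result<Vec<SignRequest>, reqwest::Error> {
    client
        .get(format!("{indexer_url}/sign-requests"))
        .query(&[("after_block", after_block.to_string())])
        .send()
        .await?
        .error_for_status()?
        .json::<Vec<SignRequest>>()
        .await
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    let mut last_seen = 0u64;
    loop {
        for req in poll_new_requests(&client, "http://localhost:8080", last_seen).await? {
            last_seen = last_seen.max(req.block_height);
            // Hand `req.payload` off to the MPC signing pipeline here.
            println!("new sign request from {}", req.predecessor_id);
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}
```

A streaming or callback API would replace the polling loop with a long-lived connection or a webhook, but the contract between the two services stays roughly the same.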
(Option 3) NEAR Lake Framework
We just stream data from S3 using an already built Rust client.
Pros:
Cons:
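For reference, here is a minimal sketch of what the contract filtering from the description could look like on top of the Lake Framework's Rust client. It assumes the `near-lake-framework` 0.7-style API (a `streamer` that returns a join handle plus an mpsc receiver) and the view types re-exported via `near_indexer_primitives`; `multichain.near` and `sign` are the placeholder names from above, and exact names may differ between crate versions.

```rust
// Sketch dependencies: near-lake-framework ~0.7, tokio. Reading the public
// Lake S3 bucket also requires AWS credentials to be configured.
use near_lake_framework::near_indexer_primitives::views::{ActionView, ReceiptEnumView};
use near_lake_framework::LakeConfigBuilder;

// Placeholder names from the issue description.
const MPC_CONTRACT: &str = "multichain.near";
const SIGN_METHOD: &str = "sign";

#[tokio::main]
async fn main() {
    let config = LakeConfigBuilder::default()
        .testnet()
        .start_block_height(100_000_000)
        .build()
        .expect("failed to build LakeConfig");
    let (_lake_handle, mut stream) = near_lake_framework::streamer(config);

    while let Some(message) = stream.recv().await {
        for shard in &message.shards {
            for outcome in &shard.receipt_execution_outcomes {
                let receipt = &outcome.receipt;
                if receipt.receiver_id.as_str() != MPC_CONTRACT {
                    continue;
                }
                if let ReceiptEnumView::Action { actions, .. } = &receipt.receipt {
                    for action in actions {
                        if let ActionView::FunctionCall { method_name, .. } = action {
                            if method_name.as_str() == SIGN_METHOD {
                                // The function-call args carry the payload that the
                                // MPC network is being asked to sign.
                                println!(
                                    "sign request in block {} from {} (receipt {})",
                                    message.block.header.height,
                                    receipt.predecessor_id,
                                    receipt.receipt_id
                                );
                            }
                        }
                    }
                }
            }
        }
    }
}
```

As noted in the comments above, this reads blocks from a centrally operated S3 bucket, so it only fits the prototyping/fallback role rather than being the primary source of requests.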
Final Thoughts
Regardless of which option we choose, we will have to build something custom for integration tests, which might be a significant time sink. My personal feeling is that Option 2 is the most reasonable one in the long run, but we can also opt for Option 3 for now if we are okay with doing double the work for integration tests.