fix: typos (near#9179)

VanBarbascu authored Nov 2, 2023
2 parents f9e6707 + 3a2da81 · commit 04f4b0c
Showing 14 changed files with 31 additions and 31 deletions.
4 changes: 2 additions & 2 deletions docs/architecture/README.md
@@ -296,7 +296,7 @@ expensive --timeout=1800 near-client near_client tests::catching_up::test_catchu
For more details regarding nightly tests see `nightly/README.md`.

Note that what counts as a slow test isn’t exactly defined as of now.
If it takes just a couple seconds than it’s probably fine. Anything
If it takes just a couple seconds then it’s probably fine. Anything
slower should probably be classified as an expensive test. In
particular, if libtest complains the test takes more than 60 seconds
than it definitely is and expensive test.
then it definitely is and expensive test.
2 changes: 1 addition & 1 deletion docs/architecture/gas/README.md
@@ -56,7 +56,7 @@ are always full. But that is rarely the case, as the gas price increases
exponentially when chunks are full, which would cause traffic to go back
eventually.

Futhermore, the hardware has to be ready for transactions of all types,
Furthermore, the hardware has to be ready for transactions of all types,
including transactions chosen by a malicious actor selecting only the most
complex transactions. Those transactions can also be unbalanced in what
bottlenecks they hit. For example, a chunk can be filled with transactions that
2 changes: 1 addition & 1 deletion docs/architecture/gas/estimator.md
@@ -12,7 +12,7 @@ per 1 Tgas of execution, we spend no more than 1ms wall-clock time.

For now, nearcore timing is the only one that matters. Things will become more
complicated once there are multiple client implementations. But knowing that
nearcore can serve requests fast enough prooves that it is possible to be at
nearcore can serve requests fast enough proves that it is possible to be at
least as fast. However, we should be careful to not couple costs too tightly
with the specific implementation of nearcore to allow for innovation in new
clients.
2 changes: 1 addition & 1 deletion docs/architecture/how/README.md
@@ -91,7 +91,7 @@ do today):
the current head of the node, the head is updated.
5. The node checks whether any blocks in the `OrphanPool` are ready to be
processed in a BFS order and processes all of them until none can be
processed any more. Note that a block is put into the `OrphanPool` if and
processed anymore. Note that a block is put into the `OrphanPool` if and
only if its previous block is not accepted.
6. Upon acceptance of a block, the node would check whether it needs to run
garbage collection. If it needs to, it would garbage collect two blocks worth
12 changes: 6 additions & 6 deletions docs/architecture/how/cross-shard.md
@@ -1,7 +1,7 @@
# Cross shard transactions - deep dive

In this article, we'll look deeper into how cross-shard transactions are working
on the simple example of user `shard0` transfering money to user `shard1`.
on the simple example of user `shard0` transferring money to user `shard1`.

These users are on separate shards (`shard0` is on shard 0 and `shard1` is on
shard 1).
@@ -50,7 +50,7 @@ As the first step, we want to change this transaction into a Receipt (a.k.a
* the message signature matches (that is - that this message was actually signed
by this key)
* that this key is authorized to act on behalf of that account (so it is a full
access key to this account - or a valid fuction key).
access key to this account - or a valid function key).

The last point above means, that we MUST execute this (Transaction to Receipt)
transition within the shard that the `signer` belongs to (as other shards don't
@@ -124,7 +124,7 @@ Chunk: Ok(
```

**Side note:** When we're converting the transaction into a receipt, we also use
this moment to deduct prepaid gas fees and transfered tokens from the 'signer'
this moment to deduct prepaid gas fees and transferred tokens from the 'signer'
account. The details on how much gas is charged can be found at https://nomicon.io/RuntimeSpec/Fees/.

## Step 2 - cross shard receipt
@@ -298,7 +298,7 @@ So putting it all together would look like this:
But wait - NEAR was saying that transfers are happening with 2 blocks - but here
I see that it took 3 blocks. What's wrong?

The image above is a simplification, and reality is a little bit tricker -
The image above is a simplification, and reality is a little bit trickier -
especially as receipts in a given chunks are actually receipts received as a
result from running a PREVIOUS chunk from this shard.

@@ -317,7 +317,7 @@ So our image should look more like this:
In this example, the black boxes are representing the 'processing' of the chunk,
and red arrows are cross-shard communication.

So when we process Shard 0 from block 1676, we read the transation, and output
So when we process Shard 0 from block 1676, we read the transaction, and output
the receipt - which later becomes the input for shard 1 in block 1677.

But you might still be wondering - so why didn't we add the Receipt (transfer)
@@ -337,4 +337,4 @@ result (receipt) into next block's chunk.
<!-- TODO: maybe add the link to that article here? -->
In a future article, we'll discuss how the actual cross-shard communication
works (red arrows) in the picture, and how we could guarantee that a given shard
really gets all the red arrows, before is starts processing.
really gets all the red arrows, before it starts processing.
4 changes: 2 additions & 2 deletions docs/architecture/how/meta-tx.md
@@ -58,7 +58,7 @@ concept, implemented off-chain. Think of it as a server that accepts a
`SignedDelegateAction`, does some checks on them and eventually forwards it
inside a transaction to the blockchain network.

A relayer may chose to offer their service for free but that's not going to be
A relayer may choose to offer their service for free but that's not going to be
financially viable long-term. But they could easily have the user pay using
other means, outside of Near blockchain. And with some tricks, it can even be
paid using fungible tokens on Near.
@@ -106,7 +106,7 @@ this again requires some level of trust between the relayer and Alice.

A potential solution could involve linear dependencies between the action
receipts spawned from a single meta transaction. Only if the first succeeds,
will the second start executing,and so on. But this quickly gets too complicated
will the second start executing, and so on. But this quickly gets too complicated
for the MVP and is therefore left open for future improvements.

## Constraints on the actions inside a meta transaction
2 changes: 1 addition & 1 deletion docs/architecture/how/resharding.md
@@ -73,7 +73,7 @@ That's why we cannot move it column 'in trie parts' (like we did for others), bu

## What should we improve to achieve 'stressless' resharding?

Currently trie is split sequencially (shard by shard), and also sequancially within a shard - by iterating over all the elements.
Currently trie is split sequentially (shard by shard), and also sequentially within a shard - by iterating over all the elements.

This process must finish within a single epoch - which is might be challenging (especially for larger archival nodes).

4 changes: 2 additions & 2 deletions docs/architecture/how/serialization.md
@@ -102,7 +102,7 @@ BorshSerialize/BorshDeserialize

## Questions

### Why don’t you use JSON for everything ?
### Why don’t you use JSON for everything?

While this is a tempting option, JSON has a few drawbacks:

@@ -153,7 +153,7 @@ Here, we have children at index 0 and 2 which has a bitmap of `101`
Custom encoder:

```
// Number of children detetermined by the bitmask
// Number of children determined by the bitmask
[16 bits bitmask][32 bytes child][32 bytes child]
[5][0x11][0x12]
// Total size: 2 + 32 + 32 = 68 bytes
2 changes: 1 addition & 1 deletion docs/architecture/how/tx_routing.md
@@ -75,7 +75,7 @@ What happens afterwards will be covered in future episodes/articles.

### Transaction being added multiple times

But such a approach means, that we’re forwarding the same transaction to multiple
But such an approach means, that we’re forwarding the same transaction to multiple
validators (currently 4) - so can it be added multiple times?

No. Remember that a transaction has a concrete hash which is used as a global
8 changes: 4 additions & 4 deletions docs/architecture/network.md
@@ -81,7 +81,7 @@ Responsibilities:
First, the `PeerManagerActor` actor gets started. `PeerManagerActor` opens the
TCP server, which listens to incoming connections. It starts the
`RoutingTableActor`, which then starts the `EdgeValidatorActor`. When
a incoming connection gets accepted, it starts a new `PeerActor`
an incoming connection gets accepted, it starts a new `PeerActor`
on its own thread.

# 4. NetworkConfig
@@ -93,7 +93,7 @@ Here is a list of features read from config:
* `boot_nodes` - list of nodes to connect to on start.
* `addr` - listening address.
* `max_num_peers` - by default we connect up to 40 peers, current implementation
supports upto 128.
supports up to 128.

# 5. Connecting to other peers.

@@ -389,8 +389,8 @@ Routing table computation does a few things:
* Removes unreachable edges from memory and stores them to disk.
* The distance is calculated as the minimum number of nodes on the path from
given node `A`, to each other node on the network. That is, `A` has a distance
of `0` to itself. It's neighbors will have a distance of `1`. The neighbors of
theirs neighbors will have a distance of `2`, etc.
of `0` to itself. Its neighbors will have a distance of `1`. The neighbors of
their neighbors will have a distance of `2`, etc.

## 9.1 Step 1

4 changes: 2 additions & 2 deletions docs/architecture/next/catchup_and_state_sync.md
@@ -18,7 +18,7 @@ As we progress towards phase 2 and keep increasing number of shards - the catchu
This means that we have to do some larger changes to the state sync design, as requirements start to differ a lot:
* catchups are high priority (the validator MUST catchup within 1 epoch - otherwise it will not be able to produce blocks for the new shards in the next epoch - and therefore it will not earn rewards).
* a lot more catchups in progress (with lots of shards basically every validator would have to catchup at least one shard at each epoch boundary) - this leads to a lot more potential traffic on the network
* malicious attacks & incentives - the state data can be large and can cause a lot of network traffic. At the same time it is quite critical (see point above), so we'll have to make sure that the nodes are incetivised to provide the state parts upon request.
* malicious attacks & incentives - the state data can be large and can cause a lot of network traffic. At the same time it is quite critical (see point above), so we'll have to make sure that the nodes are incentivised to provide the state parts upon request.
* only a subset of peers will be available to request the state sync from (as not everyone from our peers will be tracking the shard that we're interested in).


@@ -34,7 +34,7 @@ We're looking at the performance of state sync:

### Better performance on the requestor side

Currently the parts are applied only once all them are downloaded - instead we should try to apply them in parallel - after each part is received.
Currently the parts are applied only once all of them are downloaded - instead we should try to apply them in parallel - after each part is received.

When we receive a part, we should announce this information to our peers - so that they know that they can request it from us if they need it.

12 changes: 6 additions & 6 deletions docs/architecture/next/malicious_chunk_producer_and_phase2.md
@@ -1,6 +1,6 @@
# Malicious producers in phase 2 of sharding.

In this document, we'll compare the impact of the hypothethical malicious producer on the NEAR system (both in the current setup and how it will work when phase2 is implemented).
In this document, we'll compare the impact of the hypothetical malicious producer on the NEAR system (both in the current setup and how it will work when phase2 is implemented).

## Current state (Phase 1)

@@ -34,7 +34,7 @@ powerful machines with > 100 cores).
So in the similar scenario as above - ``C1`` creates a malicious chunks, and
sends it to ``B1``, which includes it in the block.

And here's where the complexity starts - as most of the valiators will NOT
And here's where the complexity starts - as most of the validators will NOT
track the shard which ``C1`` was producing - so they will still sign the block.

The validators that do track that shard will of course (assuming that they are non-malicious) refuse the sign. But overall, they will be a small majority - so the block is going to get enough signatures and be added to the chain.
@@ -61,7 +61,7 @@ and it to the next block.
Then the validators do the verification themselves, and if successful, they
sign the block.

When such block is succesfully signed, the protocol automatically slashes
When such block is successfully signed, the protocol automatically slashes
malicious nodes (more details below) and initiates the rollback to bring the
state back to the state before the bad chunk (so in our case, back to the block
produced by `B0`).
@@ -72,7 +72,7 @@ produced by `B0`).
Slashing is the process of taking away the part of the stake from validators
that are considered malicious.

In the example above, we'll definately need to slash the ``C1`` - and potentially also any validators that were tracking that shard and did sign the bad block.
In the example above, we'll definitely need to slash the ``C1`` - and potentially also any validators that were tracking that shard and did sign the bad block.

Things that we'll have to figure out in the future:
* how much do we slash? all of the stake? some part?
@@ -86,7 +86,7 @@ Things that we'll have to figure out in the future:
## Problems with the current Phase 2 design

### Is slashing painful enough?
In the example above, we'd succesfully slash the ``C1`` producer - but was it
In the example above, we'd successfully slash the ``C1`` producer - but was it
enough?

Currently (with 4 shards) you need around 20k NEAR to become a chunk producer.
@@ -105,4 +105,4 @@ chunk that you produced would be marked as malicious, and you'd lose your stake

So the open question is - can we do something 'smarter' in the protocol to
detect the case, where there is 'just a single' malicious (or buggy) chunk
producer and avoid the expensive rollback?
producer and avoid the expensive rollback?
2 changes: 1 addition & 1 deletion docs/architecture/storage/database.md
@@ -18,7 +18,7 @@ We store the database in RocksDB. This document is an attempt to give hints abou
- In this family, each key is of the form `BlockHash | Column | AdditionalInfo` where:
+ `BlockHash: [u8; 32]` is the block hash for this change
+ `Column: u8` is defined near the top of `core/primitives/src/trie_key.rs`
+ `AdditionalInfo` depends on `Column` and is can be found in the code for the `TrieKey` struct, same file as `Column`
+ `AdditionalInfo` depends on `Column` and it can be found in the code for the `TrieKey` struct, same file as `Column`

### Contract Deployments

2 changes: 1 addition & 1 deletion docs/practices/protocol_upgrade.md
@@ -42,7 +42,7 @@ For mainnet releases, the release on github typically happens on a Monday or Tue
typically happens a week later and the protocol version upgrade happens 1-2 epochs after the voting. This
gives the node maintainers enough time to upgrade their neard nodes. The node maintainers can upgrade
their nodes at any time between the release and the voting but it is recommended to upgrade soon after the
release. This is to accomodate for any database migrations or miscellaneous delays.
release. This is to accommodate for any database migrations or miscellaneous delays.

Starting a neard node with protocol version voting in the future in a network that is already operating
at that protocol version is supported as well. This is useful in the scenario where there is a mainnet