
chore: document u128 feature #4225

Merged 11 commits on Feb 6, 2024

docs/docs/noir/concepts/data_types/integers.md (+49 −0)

@@ -51,6 +51,55 @@ If you are using the default proving backend with Noir, both even (e.g. _u2_, _i

:::


## 128-bit Unsigned Integers

The built-in structure `U128` allows you to use 128-bit unsigned integers almost like a native integer type. However, there are some differences to keep in mind:
- You cannot cast between a native integer and `U128`; conversions go through helper functions (see below).
- There is a higher performance cost when using `U128` compared to a native type.

**Contributor:**
Just a question here @guipublic: don't all casts to unsigned integers have a performance cost? I was under that impression (because of range checks).

Is this cost simply higher than usual? If not, we should maybe point to the performance cost at the top of the page and remove this one to avoid confusion.

**Contributor Author:**

There is no cast for U128, as explained in the doc, so I am not sure I understand your question.
In terms of performance, converting an integer to U128 has no cost (because U128 is the largest), and converting to a lower bit size has no specific drawback because of U128, just what you would expect.

**Collaborator:**

I guess what @signorecello meant was that working with native uints is less performant than working with Fields.

This section reads like performance costs would rank as U128 > native uint > Field.
Is that the correct way to understand it?

**Contributor Author:**

Yes, the higher the bit size, the higher the cost. But because U128 uses two limbs, the cost of arithmetic operations is even higher, especially for multiplication.


Conversions between unsigned integer types and `U128` are done through the `from_integer` and `to_integer` functions.

```rust
fn main() {
    let x = U128::from_integer(23);  // construct from a native integer
    let y = U128::from_hex("0x7");   // construct from a hexadecimal string
    let z = x + y;
    assert(z.to_integer() == 30);    // convert back to a native integer
}
```
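
As noted above, a plain `as` cast does not work with `U128`, so these conversion functions are the only way in and out of the type. Here is a minimal sketch of what this means in practice (the commented-out line is an assumed compile error, since `U128` is a struct rather than a native type):

```rust
fn main(x: u64) {
    // let a = x as U128; // assumed to be rejected: `as` casts apply only between native types
    let a = U128::from_integer(x); // use the conversion function instead
    assert(a.to_integer() == x);
}
```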

`U128` is implemented with two 64-bit limbs, representing the low and high bits, which explains the performance cost. You should expect `U128` to be roughly twice as costly for addition and four times as costly for multiplication.
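
To see where the multiplication factor comes from, here is an illustrative sketch (an assumed helper, not the library's internal code): multiplying two two-limb values expands into four partial products, versus a single product for a native integer.

```rust
// Illustrative only: carry propagation and range checks are omitted.
fn mul_cost_sketch(a_hi: Field, a_lo: Field, b_hi: Field, b_lo: Field) -> (Field, Field) {
    // (a_hi * 2^64 + a_lo) * (b_hi * 2^64 + b_lo) expands into four products:
    let lo = a_lo * b_lo;                  // contributes to bits 0..128
    let cross = a_lo * b_hi + a_hi * b_lo; // contributes to bits 64..192
    // a_hi * b_hi lies entirely above bit 128 and wraps away under U128 semantics.
    // The real implementation must still split `lo` and `cross` at the 64-bit
    // boundary and range-check the results, which adds further constraints.
    (cross, lo) // placeholder return; not the properly carried hi/lo limbs
}
```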
You can construct a `U128` from its limbs:
```rust
fn main(x: u64, y: u64) {
    // `from_u64s_be` takes the limbs in big-endian order: high limb first
    let z = U128::from_u64s_be(x, y);
    assert(z.hi == x as Field);
    assert(z.lo == y as Field);
}
```

Note that the limbs are stored as Field elements in order to avoid unnecessary conversions.
Apart from this, most operations will work as usual:

```rust
fn main(x: U128, y: U128) {
    // multiplication
    let c = x * y;
    // addition and subtraction
    let c = c - x + y;
    // division
    let c = x / y;
    // bit operations
    let c = x & y | y;
    // bit shift
    let c = x << y;
    // comparisons
    let c = x < y;
    let c = x == y;
}
```

## Overflows

Computations that exceed the type boundaries will result in overflow errors. This happens with both signed and unsigned integers. For example, attempting to prove:
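
The diff is truncated at this point; as an assumed illustration (not necessarily the file's own example), a program like the following fails with an overflow error when given x = 255 and y = 1, since the sum no longer fits in a `u8`:

```rust
// Assumed example: u8 addition errors on overflow rather than wrapping.
fn main(x: u8, y: u8) {
    let _z = x + y;
}
```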