diff --git a/.github/workflows/deploy-docs.yml b/.github/workflows/deploy-docs.yml
index d1135d1e2..42e9c103c 100644
--- a/.github/workflows/deploy-docs.yml
+++ b/.github/workflows/deploy-docs.yml
@@ -5,7 +5,7 @@ on:
     branches:
       - main
     paths:
-      - 'docs/*'
+      - 'docs/**'
 permissions:
   contents: write
diff --git a/docs/docs/icicle/golang-bindings.md b/docs/docs/icicle/golang-bindings.md
index c0004e522..6b832cdda 100644
--- a/docs/docs/icicle/golang-bindings.md
+++ b/docs/docs/icicle/golang-bindings.md
@@ -1,3 +1,105 @@
 # Golang bindings
-Golang is WIP in v1, coming soon. Please checkout a previous [release v0.1.0](https://github.com/ingonyama-zk/icicle/releases/tag/v0.1.0) for golang bindings.
+Golang bindings allow you to use ICICLE as a Golang library.
+The source code for all Golang libraries can be found [here](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/golang).
+
+The Golang bindings consist of multiple packages.
+
+[`core`](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/golang/core), which defines all shared methods and structures, such as configuration structures and memory slices.
+
+[`cuda-runtime`](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/golang/cuda_runtime), which defines abstractions over CUDA methods for allocating memory and for initializing and managing streams, as well as `DeviceContext`, which enables users to define and keep track of devices.
+
+Each curve has its own package, which you can find [here](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/golang/curves). If your project uses BN254, you only need to install that single package, named [`bn254`](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/golang/curves/bn254).
+
+## Using ICICLE Golang bindings in your project
+
+To add ICICLE to your `go.mod` file, run:
+
+```bash
+go get github.com/ingonyama-zk/icicle
+```
+
+If you want to specify a specific branch:
+
+```bash
+go get github.com/ingonyama-zk/icicle@<branch>
+```
+
+For a specific commit:
+
+```bash
+go get github.com/ingonyama-zk/icicle@<commit>
+```
+
+To build the shared libraries you can run this script:
+
+```
+./build.sh <curve> [G2_enabled]
+
+curve - The name of the curve to build or "all" to build all curves
+G2_enabled - Optional - To build with G2 enabled
+```
+
+For example, if you want to build all curves with G2 enabled you would run:
+
+```bash
+./build.sh all ON
+```
+
+If you are interested in building a specific curve you would run:
+
+```bash
+./build.sh bls12_381 ON
+```
+
+Now you can import ICICLE into your project:
+
+```golang
+import (
+	"github.com/stretchr/testify/assert"
+	"testing"
+
+	"github.com/ingonyama-zk/icicle/wrappers/golang/core"
+	cr "github.com/ingonyama-zk/icicle/wrappers/golang/cuda_runtime"
+)
+...
+```
+
+## Running tests
+
+To run all tests, for all curves:
+
+```bash
+go test --tags=g2 ./... -count=1
+```
+
+If you don't want to include g2 tests then drop `--tags=g2`.
+
+If you wish to run tests for a specific curve:
+
+```bash
+go test <path_to_curve> -count=1
+```
+
+## How do Golang bindings work?
+
+The libraries produced from the CUDA code compilation are used to bind Golang to ICICLE's CUDA code.
+
+1. These libraries (named `libingo_<curve>.a`) can be imported in your Go project to leverage the GPU accelerated functionalities provided by ICICLE.
+
+2. In your Go project, you can use `cgo` to link these libraries. Here's a basic example of how to do so:
+
+```go
+/*
+#cgo LDFLAGS: -L/path/to/shared/libs -lingo_bn254
+#include "icicle.h" // make sure you use the correct header file(s)
+*/
+import "C"
+
+func main() {
+	// Now you can call the C functions from the ICICLE libraries.
+	// Note that C function calls are prefixed with 'C.' in Go code.
+}
+```
+
+Replace `/path/to/shared/libs` with the actual path where the shared libraries are located on your system.
diff --git a/docs/docs/icicle/golang-bindings/msm.md b/docs/docs/icicle/golang-bindings/msm.md
new file mode 100644
index 000000000..1f4bd5c60
--- /dev/null
+++ b/docs/docs/icicle/golang-bindings/msm.md
@@ -0,0 +1,200 @@
+# MSM
+
+### Supported curves
+
+`bls12-377`, `bls12-381`, `bn254`, `bw6-761`
+
+## MSM Example
+
+```go
+package main
+
+import (
+	"github.com/ingonyama-zk/icicle/wrappers/golang/core"
+	cr "github.com/ingonyama-zk/icicle/wrappers/golang/cuda_runtime"
+)
+
+func main() {
+	// Obtain the default MSM configuration.
+	cfg := GetDefaultMSMConfig()
+
+	// Define the size of the problem, here 2^18.
+	size := 1 << 18
+
+	// Generate scalars and points for the MSM operation.
+	scalars := GenerateScalars(size)
+	points := GenerateAffinePoints(size)
+
+	// Create a CUDA stream for asynchronous operations.
+	stream, _ := cr.CreateStream()
+	var p Projective
+
+	// Allocate memory on the device for the result of the MSM operation.
+	var out core.DeviceSlice
+	_, e := out.MallocAsync(p.Size(), p.Size(), stream)
+
+	if e != cr.CudaSuccess {
+		panic(e)
+	}
+
+	// Set the CUDA stream in the MSM configuration.
+	cfg.Ctx.Stream = &stream
+	cfg.IsAsync = true
+
+	// Perform the MSM operation.
+	e = Msm(scalars, points, &cfg, out)
+
+	if e != cr.CudaSuccess {
+		panic(e)
+	}
+
+	// Allocate host memory for the results and copy the results from the device.
+	outHost := make(core.HostSlice[Projective], 1)
+	cr.SynchronizeStream(&stream)
+	outHost.CopyFromDevice(&out)
+
+	// Free the device memory allocated for the results.
+	out.Free()
+}
+```
+
+## MSM Method
+
+```go
+func Msm(scalars core.HostOrDeviceSlice, points core.HostOrDeviceSlice, cfg *core.MSMConfig, results core.HostOrDeviceSlice) cr.CudaError
+```
+
+### Parameters
+
+- **scalars**: A slice containing the scalars for multiplication. It can reside either in host memory or device memory.
+- **points**: A slice containing the points to be multiplied with scalars. Like scalars, these can also be in host or device memory. +- **cfg**: A pointer to an `MSMConfig` object, which contains various configuration options for the MSM operation. +- **results**: A slice where the results of the MSM operation will be stored. This slice can be in host or device memory. + +### Return Value + +- **CudaError**: Returns a CUDA error code indicating the success or failure of the MSM operation. + +## MSMConfig + +The `MSMConfig` structure holds configuration parameters for the MSM operation, allowing customization of its behavior to optimize performance based on the specifics of the operation or the underlying hardware. + +```go +type MSMConfig struct { + Ctx cr.DeviceContext + PrecomputeFactor int32 + C int32 + Bitsize int32 + LargeBucketFactor int32 + batchSize int32 + areScalarsOnDevice bool + AreScalarsMontgomeryForm bool + arePointsOnDevice bool + ArePointsMontgomeryForm bool + areResultsOnDevice bool + IsBigTriangle bool + IsAsync bool +} +``` + +### Fields + +- **Ctx**: Device context containing details like device id and stream. +- **PrecomputeFactor**: Controls the number of extra points to pre-compute. +- **C**: Window bitsize, a key parameter in the "bucket method" for MSM. +- **Bitsize**: Number of bits of the largest scalar. +- **LargeBucketFactor**: Sensitivity to frequently occurring buckets. +- **batchSize**: Number of results to compute in one batch. +- **areScalarsOnDevice**: Indicates if scalars are located on the device. +- **AreScalarsMontgomeryForm**: True if scalars are in Montgomery form. +- **arePointsOnDevice**: Indicates if points are located on the device. +- **ArePointsMontgomeryForm**: True if point coordinates are in Montgomery form. +- **areResultsOnDevice**: Indicates if results are stored on the device. +- **IsBigTriangle**: If `true` MSM will run in Large triangle accumulation if `false` Bucket accumulation will be chosen. 
Default value: `false`.
+- **IsAsync**: If `true`, runs MSM asynchronously.
+
+### Default Configuration
+
+Use `GetDefaultMSMConfig` to obtain a default configuration, which can then be customized as needed.
+
+```go
+func GetDefaultMSMConfig() MSMConfig
+```
+
+## How do I toggle between the supported algorithms?
+
+When creating your MSM config you may state which algorithm you wish to use. `cfg.IsBigTriangle = true` will activate Large triangle accumulation and `cfg.IsBigTriangle = false` will activate Bucket accumulation.
+
+```go
+...
+
+// Obtain the default MSM configuration.
+cfg := GetDefaultMSMConfig()
+
+cfg.IsBigTriangle = true
+
+...
+```
+
+## How do I toggle between MSM modes?
+
+Toggling between MSM modes occurs automatically based on the number of results you are expecting from the `Msm` function.
+
+The number of results is interpreted from the size of `var out core.DeviceSlice`. It is therefore important, when allocating memory for `var out core.DeviceSlice`, to make sure that you are allocating `batchSize * p.Size()` bytes.
+
+```go
+...
+
+batchSize := 3
+var p G2Projective
+var out core.DeviceSlice
+out.Malloc(batchSize*p.Size(), p.Size())
+
+...
+```
+
+## Support for G2 group
+
+To activate G2 support, first make sure you are building the static libraries with the G2 feature enabled.
+
+```bash
+./build.sh bls12_381 ON
+```
+
+Now when importing `icicle`, you should have access to G2 features.
+
+```go
+import (
+	"github.com/ingonyama-zk/icicle/wrappers/golang/core"
+)
+```
+
+These features include `G2Projective` and `G2Affine` points as well as a `G2Msm` method.
+
+```go
+...
+
+cfg := GetDefaultMSMConfig()
+size := 1 << 12
+batchSize := 3
+totalSize := size * batchSize
+scalars := GenerateScalars(totalSize)
+points := G2GenerateAffinePoints(totalSize)
+
+var p G2Projective
+var out core.DeviceSlice
+out.Malloc(batchSize*p.Size(), p.Size())
+G2Msm(scalars, points, &cfg, out)
+
+...
+```
+
+`G2Msm` works the same way as `Msm`; the difference is that it operates on G2 points.
+
+Additionally, when building your application, make sure to use the `g2` build tag:
+
+```bash
+go build -tags=g2
+```
diff --git a/docs/docs/icicle/golang-bindings/ntt.md b/docs/docs/icicle/golang-bindings/ntt.md
new file mode 100644
index 000000000..3ddd2394a
--- /dev/null
+++ b/docs/docs/icicle/golang-bindings/ntt.md
@@ -0,0 +1,100 @@
+# NTT
+
+### Supported curves
+
+`bls12-377`, `bls12-381`, `bn254`, `bw6-761`
+
+## NTT Example
+
+```go
+package main
+
+import (
+	"github.com/ingonyama-zk/icicle/wrappers/golang/core"
+	cr "github.com/ingonyama-zk/icicle/wrappers/golang/cuda_runtime"
+)
+
+func main() {
+	// Obtain the default NTT configuration with a predefined coset generator.
+	cfg := GetDefaultNttConfig()
+
+	// Define the size of the input scalars.
+	size := 1 << 18
+
+	// Generate scalars for the NTT operation.
+	scalars := GenerateScalars(size)
+
+	// Set the direction of the NTT (forward or inverse).
+	dir := core.KForward
+
+	// Allocate memory for the results of the NTT operation.
+	results := make(core.HostSlice[ScalarField], size)
+
+	// Perform the NTT operation.
+	err := Ntt(scalars, dir, &cfg, results)
+	if err != cr.CudaSuccess {
+		panic("NTT operation failed")
+	}
+}
+```
+
+## NTT Method
+
+```go
+func Ntt[T any](scalars core.HostOrDeviceSlice, dir core.NTTDir, cfg *core.NTTConfig[T], results core.HostOrDeviceSlice) core.IcicleError
+```
+
+### Parameters
+
+- **scalars**: A slice containing the input scalars for the transform. It can reside either in host memory or device memory.
+- **dir**: The direction of the NTT operation (`KForward` or `KInverse`).
+- **cfg**: A pointer to an `NTTConfig` object, containing configuration options for the NTT operation.
+- **results**: A slice where the results of the NTT operation will be stored. This slice can be in host or device memory.
+
+### Return Value
+
+- **IcicleError**: Returns an `IcicleError` indicating the success or failure of the NTT operation.
+
+## NTT Configuration (NTTConfig)
+
+The `NTTConfig` structure holds configuration parameters for the NTT operation, allowing customization of its behavior to optimize performance based on the specifics of your protocol.
+
+```go
+type NTTConfig[T any] struct {
+	Ctx cr.DeviceContext
+	CosetGen T
+	BatchSize int32
+	Ordering Ordering
+	areInputsOnDevice bool
+	areOutputsOnDevice bool
+	IsAsync bool
+}
+```
+
+### Fields
+
+- **Ctx**: Device context containing details like device ID and stream ID.
+- **CosetGen**: Coset generator used for coset (i)NTTs, defaulting to no coset being used.
+- **BatchSize**: The number of NTTs to compute in one operation, defaulting to 1.
+- **Ordering**: Ordering of inputs and outputs (`KNN`, `KNR`, `KRN`, `KRR`, `KMN`, `KNM`), affecting how data is arranged.
+- **areInputsOnDevice**: Indicates if input scalars are located on the device.
+- **areOutputsOnDevice**: Indicates if results are stored on the device.
+- **IsAsync**: Controls whether the NTT operation runs asynchronously.
+
+### Default Configuration
+
+Use `GetDefaultNTTConfig` to obtain a default configuration, customizable as needed.
+
+```go
+func GetDefaultNTTConfig[T any](cosetGen T) NTTConfig[T]
+```
+
+### Initializing the NTT Domain
+
+Before performing NTT operations the NTT domain must be initialized; this only needs to be done once per GPU, since the twiddle factors are cached.
+
+```go
+func InitDomain(primitiveRoot ScalarField, ctx cr.DeviceContext, fastTwiddles bool) core.IcicleError
+```
+
+This function initializes the domain with a given primitive root, optionally using fast twiddle factors to optimize the computation.
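For intuition about what `InitDomain` precomputes: the cached twiddle factors are simply the successive powers of a root of unity derived from the primitive root. Here is a toy sketch over the tiny field F_17 — this is illustrative only, not the ICICLE API; the field, `powMod`, and `twiddles` are all invented for the example.

```go
package main

import "fmt"

const p = 17 // toy NTT-friendly prime: p - 1 = 16 is a power of two

// powMod computes b^e mod m by square-and-multiply.
func powMod(b, e, m uint64) uint64 {
	r := uint64(1)
	b %= m
	for e > 0 {
		if e&1 == 1 {
			r = r * b % m
		}
		b = b * b % m
		e >>= 1
	}
	return r
}

// twiddles returns the n twiddle factors 1, w, w^2, ..., w^(n-1),
// where w is an order-n root of unity derived from a generator g.
// This is the kind of table a real NTT domain caches once per GPU.
func twiddles(g, n uint64) []uint64 {
	w := powMod(g, (p-1)/n, p) // w has multiplicative order n
	t := make([]uint64, n)
	t[0] = 1
	for i := uint64(1); i < n; i++ {
		t[i] = t[i-1] * w % p
	}
	return t
}

func main() {
	// 3 generates the full multiplicative group mod 17,
	// so it plays the role of the primitive root here.
	fmt.Println(twiddles(3, 8)) // powers of w = 3^2 = 9 mod 17
}
```

Because the table depends only on the field and the maximum transform size, computing it once and reusing it across NTT calls is what makes the one-time `InitDomain` call worthwhile.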
diff --git a/docs/docs/icicle/golang-bindings/vec-ops.md b/docs/docs/icicle/golang-bindings/vec-ops.md new file mode 100644 index 000000000..69dbe5b9e --- /dev/null +++ b/docs/docs/icicle/golang-bindings/vec-ops.md @@ -0,0 +1,132 @@ +# Vector Operations + +## Overview + +The VecOps API provides efficient vector operations such as addition, subtraction, and multiplication. + +## Example + +### Vector addition + +```go +package main + +import ( + "github.com/ingonyama-zk/icicle/wrappers/golang/core" + cr "github.com/ingonyama-zk/icicle/wrappers/golang/cuda_runtime" +) + +func main() { + testSize := 1 << 12 + a := GenerateScalars(testSize) + b := GenerateScalars(testSize) + out := make(core.HostSlice[ScalarField], testSize) + cfg := core.DefaultVecOpsConfig() + + // Perform vector addition + err := VecOp(a, b, out, cfg, core.Add) + if err != cr.CudaSuccess { + panic("Vector addition failed") + } +} +``` + +### Vector Subtraction + +```go +package main + +import ( + "github.com/ingonyama-zk/icicle/wrappers/golang/core" + cr "github.com/ingonyama-zk/icicle/wrappers/golang/cuda_runtime" +) + +func main() { + testSize := 1 << 12 + a := GenerateScalars(testSize) + b := GenerateScalars(testSize) + out := make(core.HostSlice[ScalarField], testSize) + cfg := core.DefaultVecOpsConfig() + + // Perform vector subtraction + err := VecOp(a, b, out, cfg, core.Sub) + if err != cr.CudaSuccess { + panic("Vector subtraction failed") + } +} +``` + +### Vector Multiplication + +```go +package main + +import ( + "github.com/ingonyama-zk/icicle/wrappers/golang/core" + cr "github.com/ingonyama-zk/icicle/wrappers/golang/cuda_runtime" +) + +func main() { + testSize := 1 << 12 + a := GenerateScalars(testSize) + b := GenerateScalars(testSize) + out := make(core.HostSlice[ScalarField], testSize) + cfg := core.DefaultVecOpsConfig() + + // Perform vector multiplication + err := VecOp(a, b, out, cfg, core.Mul) + if err != cr.CudaSuccess { + panic("Vector multiplication failed") + } +} +``` + +## 
VecOps Method + +```go +func VecOp(a, b, out core.HostOrDeviceSlice, config core.VecOpsConfig, op core.VecOps) (ret cr.CudaError) +``` + +### Parameters + +- **a**: The first input vector. +- **b**: The second input vector. +- **out**: The output vector where the result of the operation will be stored. +- **config**: A `VecOpsConfig` object containing various configuration options for the vector operations. +- **op**: The operation to perform, specified as one of the constants (`Sub`, `Add`, `Mul`) from the `VecOps` type. + +### Return Value + +- **CudaError**: Returns a CUDA error code indicating the success or failure of the vector operation. + +## VecOpsConfig + +The `VecOpsConfig` structure holds configuration parameters for the vector operations, allowing customization of its behavior. + +```go +type VecOpsConfig struct { + Ctx cr.DeviceContext + isAOnDevice bool + isBOnDevice bool + isResultOnDevice bool + IsResultMontgomeryForm bool + IsAsync bool +} +``` + +### Fields + +- **Ctx**: Device context containing details like device ID and stream ID. +- **isAOnDevice**: Indicates if vector `a` is located on the device. +- **isBOnDevice**: Indicates if vector `b` is located on the device. +- **isResultOnDevice**: Specifies where the result vector should be stored (device or host memory). +- **IsResultMontgomeryForm**: Determines if the result vector should be in Montgomery form. +- **IsAsync**: Controls whether the vector operation runs asynchronously. + +### Default Configuration + +Use `DefaultVecOpsConfig` to obtain a default configuration, customizable as needed. 
+ +```go +func DefaultVecOpsConfig() VecOpsConfig +``` diff --git a/docs/docs/icicle/primitives/msm.md b/docs/docs/icicle/primitives/msm.md index 9b4b83359..81e5ce6f0 100644 --- a/docs/docs/icicle/primitives/msm.md +++ b/docs/docs/icicle/primitives/msm.md @@ -49,13 +49,17 @@ Accelerating MSM is crucial to a ZK protocol's performance due to the [large per You can learn more about how MSMs work from this [video](https://www.youtube.com/watch?v=Bl5mQA7UL2I) and from our resource list on [Ingopedia](https://www.ingonyama.com/ingopedia/msm). -# Using MSM - ## Supported curves MSM supports the following curves: -`bls12-377`, `bls12-381`, `bn-254`, `bw6-761`, `grumpkin` +`bls12-377`, `bls12-381`, `bn254`, `bw6-761`, `grumpkin` + + +## Supported Bindings + +- [Golang](../golang-bindings/msm.md) +- [Rust](../rust-bindings//msm.md) ## Supported algorithms @@ -79,25 +83,6 @@ Large Triangle Accumulation is a method for optimizing MSM which focuses on redu The Large Triangle Accumulation algorithm is more sequential in nature, as it builds upon each step sequentially (accumulating sums and then performing doubling). This structure can make it less suitable for parallelization but potentially more efficient for a large batch of smaller MSM computations. - -### How do I toggle between the supported algorithms? - -When creating your MSM Config you may state which algorithm you wish to use. `is_big_triangle=true` will activate Large triangle accumulation and `is_big_triangle=false` will activate Bucket accumulation. - -```rust -... - -let mut cfg_bls12377 = msm::get_default_msm_config::(); - -// is_big_triangle will determine which algorithm to use -cfg_bls12377.is_big_triangle = true; - -msm::msm(&scalars, &points, &cfg, &mut msm_results).unwrap(); -... -``` - -You may reference the rust code [here](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/wrappers/rust/icicle-core/src/msm/mod.rs#L54). 
- ## MSM Modes ICICLE MSM also supports two different modes `Batch MSM` and `Single MSM` @@ -109,54 +94,3 @@ Batch MSM allows you to run many MSMs with a single API call, Single MSM will la This decision is highly dependent on your use case and design. However, if your design allows for it, using batch mode can significantly improve efficiency. Batch processing allows you to perform multiple MSMs leveraging the parallel processing capabilities of GPUs. Single MSM mode should be used when batching isn't possible or when you have to run a single MSM. - -### How do I toggle between MSM modes? - -Toggling between MSM modes occurs automatically based on the number of results you are expecting from the `msm::msm` function. If you are expecting an array of `msm_results`, ICICLE will automatically split `scalars` and `points` into equal parts and run them as multiple MSMs in parallel. - -```rust -... - -let mut msm_result: HostOrDeviceSlice<'_, G1Projective> = HostOrDeviceSlice::cuda_malloc(1).unwrap(); -msm::msm(&scalars, &points, &cfg, &mut msm_result).unwrap(); - -... -``` - -In the example above we allocate a single expected result which the MSM method will interpret as `batch_size=1` and run a single MSM. - - -In the next example, we are expecting 10 results which sets `batch_size=10` and runs 10 MSMs in batch mode. - -```rust -... - -let mut msm_results: HostOrDeviceSlice<'_, G1Projective> = HostOrDeviceSlice::cuda_malloc(10).unwrap(); -msm::msm(&scalars, &points, &cfg, &mut msm_results).unwrap(); - -... -``` - -Here is a [reference](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/wrappers/rust/icicle-core/src/msm/mod.rs#L108) to the code which automatically sets the batch size. For more MSM examples have a look [here](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/examples/rust/msm/src/main.rs#L1). - - -## Support for G2 group - -MSM also supports G2 group. 
- -Using MSM in G2 requires a G2 config, and of course your Points should also be G2 Points. - -```rust -... - -let scalars = HostOrDeviceSlice::Host(upper_scalars[..size].to_vec()); -let g2_points = HostOrDeviceSlice::Host(g2_upper_points[..size].to_vec()); -let mut g2_msm_results: HostOrDeviceSlice<'_, G2Projective> = HostOrDeviceSlice::cuda_malloc(1).unwrap(); -let mut g2_cfg = msm::get_default_msm_config::(); - -msm::msm(&scalars, &g2_points, &g2_cfg, &mut g2_msm_results).unwrap(); - -... -``` - -Here you can [find an example](https://github.com/ingonyama-zk/icicle/blob/5a96f9937d0a7176d88c766bd3ef2062b0c26c37/examples/rust/msm/src/main.rs#L114) of MSM on G2 Points. diff --git a/docs/docs/icicle/primitives/ntt.md b/docs/docs/icicle/primitives/ntt.md index b5c782c7f..bac22944f 100644 --- a/docs/docs/icicle/primitives/ntt.md +++ b/docs/docs/icicle/primitives/ntt.md @@ -28,6 +28,10 @@ NTT supports the following curves: `bls12-377`, `bls12-381`, `bn-254`, `bw6-761` +## Supported Bindings + +- [Golang](../golang-bindings/ntt.md) +- [Rust](../rust-bindings/ntt.md) ### Examples @@ -35,87 +39,6 @@ NTT supports the following curves: - [C++ API examples](https://github.com/ingonyama-zk/icicle/blob/d84ffd2679a4cb8f8d1ac2ad2897bc0b95f4eeeb/examples/c%2B%2B/ntt/example.cu#L1) -## NTT API overview - -```rust -pub fn ntt( - input: &HostOrDeviceSlice, - dir: NTTDir, - cfg: &NTTConfig, - output: &mut HostOrDeviceSlice, -) -> IcicleResult<()> -``` - -`ntt:ntt` expects: - -`input` - buffer to read the inputs of the NTT from.
-`dir` - whether to compute forward or inverse NTT.
-`cfg` - config used to specify extra arguments of the NTT.
-`output` - buffer to write the NTT outputs into. Must be of the same size as input. - -The `input` and `output` buffers can be on device or on host. Being on host means that they will be transferred to device during runtime. - -### NTT Config - -```rust -pub struct NTTConfig<'a, S> { - pub ctx: DeviceContext<'a>, - pub coset_gen: S, - pub batch_size: i32, - pub ordering: Ordering, - are_inputs_on_device: bool, - are_outputs_on_device: bool, - pub is_async: bool, - pub ntt_algorithm: NttAlgorithm, -} -``` - -The `NTTConfig` struct is a configuration object used to specify parameters for an NTT instance. - -#### Fields - -- **`ctx: DeviceContext<'a>`**: Specifies the device context, including the device ID and the stream ID. - -- **`coset_gen: S`**: Defines the coset generator used for coset (i)NTTs. By default, this is set to `S::one()`, indicating that no coset is being used. - -- **`batch_size: i32`**: Determines the number of NTTs to compute in a single batch. The default value is 1, meaning that operations are performed on individual inputs without batching. Batch processing can significantly improve performance by leveraging parallelism in GPU computations. - -- **`ordering: Ordering`**: Controls the ordering of inputs and outputs for the NTT operation. This field can be used to specify decimation strategies (in time or in frequency) and the type of butterfly algorithm (Cooley-Tukey or Gentleman-Sande). The ordering is crucial for compatibility with various algorithmic approaches and can impact the efficiency of the NTT. - -- **`are_inputs_on_device: bool`**: Indicates whether the input data has been preloaded on the device memory. If `false` inputs will be copied from host to device. - -- **`are_outputs_on_device: bool`**: Indicates whether the output data is preloaded in device memory. If `false` outputs will be copied from host to device. If the inputs and outputs are the same pointer NTT will be computed in place. 
- -- **`is_async: bool`**: Specifies whether the NTT operation should be performed asynchronously. When set to `true`, the NTT function will not block the CPU, allowing other operations to proceed concurrently. Asynchronous execution requires careful synchronization to ensure data integrity and correctness. - -- **`ntt_algorithm: NttAlgorithm`**: Can be one of `Auto`, `Radix2`, `MixedRadix`. -`Auto` will select `Radix 2` or `Mixed Radix` algorithm based on heuristics. -`Radix2` and `MixedRadix` will force the use of an algorithm regardless of the input size or other considerations. You should use one of these options when you know for sure that you want to - - -#### Usage - -Example initialization with default settings: - -```rust -let default_config = NTTConfig::default(); -``` - -Customizing the configuration: - -```rust -let custom_config = NTTConfig { - ctx: custom_device_context, - coset_gen: my_coset_generator, - batch_size: 10, - ordering: Ordering::kRN, - are_inputs_on_device: true, - are_outputs_on_device: true, - is_async: false, - ntt_algorithm: NttAlgorithm::MixedRadix, -}; -``` - ### Ordering The `Ordering` enum defines how inputs and outputs are arranged for the NTT operation, offering flexibility in handling data according to different algorithmic needs or compatibility requirements. It primarily affects the sequencing of data points for the transform, which can influence both performance and the compatibility with certain algorithmic approaches. The available ordering options are: @@ -140,15 +63,6 @@ NTT also supports two different modes `Batch NTT` and `Single NTT` Batch NTT allows you to run many NTTs with a single API call, Single MSM will launch a single MSM computation. -You may toggle between single and batch NTT by simply configure `batch_size` to be larger then 1 in your `NTTConfig`. - -```rust -let mut cfg = ntt::get_default_ntt_config::(); -cfg.batch_size = 10 // your ntt using this config will run in batch mode. 
-```
-
-`batch_size=1` would keep our NTT in single NTT mode.
-
 Deciding whether to use `batch NTT` vs `single NTT` is highly dependent on your application and use case.
 
 **Single NTT Mode**
@@ -232,9 +146,11 @@ Mixed Radix can reduce the number of stages required to compute for large inputs
 
 ### Which algorithm should I choose?
 
-Radix 2 is faster for small NTTs. A small NTT would be around logN = 16 and batch size 1. Its also more suited for inputs which are power of 2 (e.g., 256, 512, 1024). Radix 2 won't necessarily perform better for smaller `logn` with larger batches.
+Both algorithms work only on inputs whose size is a power of 2 (e.g., 256, 512, 1024).
+
+Radix 2 is faster for small NTTs. A small NTT would be around logN = 16 and batch size 1. Radix 2 won't necessarily perform better for smaller `logn` with larger batches.
 
-Mixed radix on the other hand better for larger NTTs with larger input sizes which are not necessarily power of 2.
+Mixed radix, on the other hand, works better for larger NTTs with larger input sizes.
 
 Performance really depends on logn size, batch size, ordering, inverse, coset, coeff-field and which GPU you are using.
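To make the radix-2 structure discussed above concrete, here is a toy forward NTT over the tiny field F_17. This is an illustrative sketch only — not the ICICLE implementation — and the field, root of unity, and function names are invented. The even/odd split at each level is exactly why radix-2 requires power-of-2 input sizes.

```go
package main

import "fmt"

const p = 17 // toy NTT-friendly prime: p - 1 = 16 is a power of two

// ntt computes the forward NTT of a (whose length must be a power of two)
// over F_p using the recursive radix-2 Cooley–Tukey even/odd split:
// each level halves the problem size.
func ntt(a []uint64, w uint64) []uint64 {
	n := uint64(len(a))
	if n == 1 {
		return []uint64{a[0] % p}
	}
	even := make([]uint64, 0, n/2)
	odd := make([]uint64, 0, n/2)
	for i, v := range a {
		if i%2 == 0 {
			even = append(even, v)
		} else {
			odd = append(odd, v)
		}
	}
	// The half-size subproblems use w^2, an order-(n/2) root of unity.
	e := ntt(even, w*w%p)
	o := ntt(odd, w*w%p)
	out := make([]uint64, n)
	wi := uint64(1)
	for i := uint64(0); i < n/2; i++ {
		t := wi * o[i] % p
		out[i] = (e[i] + t) % p         // butterfly: top half
		out[i+n/2] = (e[i] + p - t) % p // butterfly: bottom half
		wi = wi * w % p
	}
	return out
}

func main() {
	// 9 has multiplicative order 8 mod 17, so it is a valid root for n = 8.
	ones := []uint64{1, 1, 1, 1, 1, 1, 1, 1}
	// A constant vector transforms to [n*c, 0, ..., 0].
	fmt.Println(ntt(ones, 9))
}
```

A mixed-radix implementation generalizes the same butterfly idea to composite split factors, which is what lets it cover large sizes in fewer stages.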
diff --git a/docs/docs/icicle/rust-bindings/msm.md b/docs/docs/icicle/rust-bindings/msm.md new file mode 100644 index 000000000..4425ba3c0 --- /dev/null +++ b/docs/docs/icicle/rust-bindings/msm.md @@ -0,0 +1,172 @@ +# MSM + +### Supported curves + +`bls12-377`, `bls12-381`, `bn-254`, `bw6-761`, `grumpkin` + +## Example + +```rust +use icicle_bn254::curve::{CurveCfg, G1Projective, ScalarCfg}; +use icicle_core::{curve::Curve, msm, traits::GenerateRandom}; +use icicle_cuda_runtime::{memory::HostOrDeviceSlice, stream::CudaStream}; + +fn main() { + let size: usize = 1 << 10; // Define the number of points and scalars + + // Generate random points and scalars + println!("Generating random G1 points and scalars for BN254..."); + let points = CurveCfg::generate_random_affine_points(size); + let scalars = ScalarCfg::generate_random(size); + + // Wrap points and scalars in HostOrDeviceSlice for MSM + let points_host = HostOrDeviceSlice::Host(points); + let scalars_host = HostOrDeviceSlice::Host(scalars); + + // Allocate memory on the CUDA device for MSM results + let mut msm_results: HostOrDeviceSlice<'_, G1Projective> = HostOrDeviceSlice::cuda_malloc(1).expect("Failed to allocate CUDA memory for MSM results"); + + // Create a CUDA stream for asynchronous execution + let stream = CudaStream::create().expect("Failed to create CUDA stream"); + let mut cfg = msm::MSMConfig::default(); + cfg.ctx.stream = &stream; + cfg.is_async = true; // Enable asynchronous execution + + // Execute MSM on the device + println!("Executing MSM on device..."); + msm::msm(&scalars_host, &points_host, &cfg, &mut msm_results).expect("Failed to execute MSM"); + + // Synchronize CUDA stream to ensure MSM execution is complete + stream.synchronize().expect("Failed to synchronize CUDA stream"); + + // Optionally, move results to host for further processing or printing + println!("MSM execution complete."); +} +``` + +## MSM API Overview + +```rust +pub fn msm( + scalars: &HostOrDeviceSlice, + points: 
&HostOrDeviceSlice>, + cfg: &MSMConfig, + results: &mut HostOrDeviceSlice>, +) -> IcicleResult<()> +``` + +### Parameters + +- **`scalars`**: A buffer containing the scalar values to be multiplied with corresponding points. +- **`points`**: A buffer containing the points to be multiplied by the scalars. +- **`cfg`**: MSM configuration specifying additional parameters for the operation. +- **`results`**: A buffer where the results of the MSM operations will be stored. + +### MSM Config + +```rust +pub struct MSMConfig<'a> { + pub ctx: DeviceContext<'a>, + points_size: i32, + pub precompute_factor: i32, + pub c: i32, + pub bitsize: i32, + pub large_bucket_factor: i32, + batch_size: i32, + are_scalars_on_device: bool, + pub are_scalars_montgomery_form: bool, + are_points_on_device: bool, + pub are_points_montgomery_form: bool, + are_results_on_device: bool, + pub is_big_triangle: bool, + pub is_async: bool, +} +``` + +- **`ctx: DeviceContext`**: Specifies the device context, device id and the CUDA stream for asynchronous execution. +- **`point_size: i32`**: +- **`precompute_factor: i32`**: Determines the number of extra points to pre-compute for each point, affecting memory footprint and performance. +- **`c: i32`**: The "window bitsize," a parameter controlling the computational complexity and memory footprint of the MSM operation. +- **`bitsize: i32`**: The number of bits of the largest scalar, typically equal to the bit size of the scalar field. +- **`large_bucket_factor: i32`**: Adjusts the algorithm's sensitivity to frequently occurring buckets, useful for non-uniform scalar distributions. +- **`batch_size: i32`**: The number of MSMs to compute in a single batch, for leveraging parallelism. +- **`are_scalars_montgomery_form`**: Set to `true` if scalars are in montgomery form. +- **`are_points_montgomery_form`**: Set to `true` if points are in montgomery form. 
+- **`are_scalars_on_device: bool`**, **`are_points_on_device: bool`**, **`are_results_on_device: bool`**: Indicate whether the corresponding buffers are on the device memory. +- **`is_big_triangle`**: If `true` MSM will run in Large triangle accumulation if `false` Bucket accumulation will be chosen. Default value: false. +- **`is_async: bool`**: Whether to perform the MSM operation asynchronously. + +### Usage + +The `msm` function is designed to compute the sum of multiple scalar-point multiplications efficiently. It supports both single MSM operations and batched operations for increased performance. The configuration allows for detailed control over the execution environment and performance characteristics of the MSM operation. + +When performing MSM operations, it's crucial to match the size of the `scalars` and `points` arrays correctly and ensure that the `results` buffer is appropriately sized to hold the output. The `MSMConfig` should be set up to reflect the specifics of the operation, including whether the operation should be asynchronous and any device-specific settings. + +## How do I toggle between the supported algorithms? + +When creating your MSM Config you may state which algorithm you wish to use. `is_big_triangle=true` will activate Large triangle accumulation and `is_big_triangle=false` will activate Bucket accumulation. + +```rust +... + +let mut cfg_bls12377 = msm::get_default_msm_config::(); + +// is_big_triangle will determine which algorithm to use +cfg_bls12377.is_big_triangle = true; + +msm::msm(&scalars, &points, &cfg, &mut msm_results).unwrap(); +... +``` + +You may reference the rust code [here](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/wrappers/rust/icicle-core/src/msm/mod.rs#L54). + + +## How do I toggle between MSM modes? + +Toggling between MSM modes occurs automatically based on the number of results you are expecting from the `msm::msm` function. 
If you are expecting an array of `msm_results`, ICICLE will automatically split `scalars` and `points` into equal parts and run them as multiple MSMs in parallel. + +```rust +... + +let mut msm_result: HostOrDeviceSlice<'_, G1Projective> = HostOrDeviceSlice::cuda_malloc(1).unwrap(); +msm::msm(&scalars, &points, &cfg, &mut msm_result).unwrap(); + +... +``` + +In the example above, we allocate a single expected result, which the MSM method interprets as `batch_size=1`, so it runs a single MSM. + + +In the next example, we are expecting 10 results, which sets `batch_size=10` and runs 10 MSMs in batch mode. + +```rust +... + +let mut msm_results: HostOrDeviceSlice<'_, G1Projective> = HostOrDeviceSlice::cuda_malloc(10).unwrap(); +msm::msm(&scalars, &points, &cfg, &mut msm_results).unwrap(); + +... +``` + +Here is a [reference](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/wrappers/rust/icicle-core/src/msm/mod.rs#L108) to the code which automatically sets the batch size. For more MSM examples have a look [here](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/examples/rust/msm/src/main.rs#L1). + +## Support for G2 group + +MSM also supports the G2 group. + +Using MSM on G2 requires a G2 config, and your points must also be G2 points. + +```rust +... + +let scalars = HostOrDeviceSlice::Host(upper_scalars[..size].to_vec()); +let g2_points = HostOrDeviceSlice::Host(g2_upper_points[..size].to_vec()); +let mut g2_msm_results: HostOrDeviceSlice<'_, G2Projective> = HostOrDeviceSlice::cuda_malloc(1).unwrap(); +let mut g2_cfg = msm::get_default_msm_config::<G2CurveCfg>(); + +msm::msm(&scalars, &g2_points, &g2_cfg, &mut g2_msm_results).unwrap(); + +... +``` + +Here you can [find an example](https://github.com/ingonyama-zk/icicle/blob/5a96f9937d0a7176d88c766bd3ef2062b0c26c37/examples/rust/msm/src/main.rs#L114) of MSM on G2 points. 
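To make the batching semantics above concrete, here is a tiny CPU reference sketch. It is purely illustrative, not ICICLE API: the `msm_reference` name, the toy modulus, and the use of integers mod a small prime in place of elliptic-curve points are all assumptions for the example. For batch `i` of size `n` it computes `results[i] = Σ_j scalars[i·n + j] · points[i·n + j]`, which is exactly the split that the size of the `results` buffer selects.

```rust
// CPU reference sketch of what an MSM computes, including the batch-mode split:
// for batch i, results[i] = Σ_j scalars[i·n + j] · points[i·n + j].
// Illustrative only: the "group" here is integers mod a small prime, whereas
// ICICLE operates on elliptic-curve points on the GPU.
const P: u64 = 65_521; // toy modulus, an assumption for the example

fn msm_reference(scalars: &[u64], points: &[u64], batch_size: usize) -> Vec<u64> {
    assert_eq!(scalars.len(), points.len(), "scalars and points must match in size");
    assert_eq!(scalars.len() % batch_size, 0, "inputs must split evenly into batches");
    let n = scalars.len() / batch_size; // points per MSM, mirroring the automatic split
    (0..batch_size)
        .map(|i| {
            scalars[i * n..(i + 1) * n]
                .iter()
                .zip(&points[i * n..(i + 1) * n])
                .fold(0u64, |acc, (s, p)| (acc + s * p % P) % P)
        })
        .collect()
}

fn main() {
    let scalars = [1, 2, 3, 4];
    let points = [10, 20, 30, 40];
    // batch_size = 1: one MSM over all four pairs (a single result is allocated).
    println!("{:?}", msm_reference(&scalars, &points, 1)); // [300]
    // batch_size = 2: the pairs split into two independent MSMs of size 2.
    println!("{:?}", msm_reference(&scalars, &points, 2)); // [50, 250]
}
```

Allocating a larger `results` buffer in ICICLE plays the same role as passing a larger `batch_size` here: the inputs are partitioned into equal chunks and each chunk is reduced independently.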
diff --git a/docs/docs/icicle/rust-bindings/ntt.md b/docs/docs/icicle/rust-bindings/ntt.md new file mode 100644 index 000000000..a1031ed91 --- /dev/null +++ b/docs/docs/icicle/rust-bindings/ntt.md @@ -0,0 +1,195 @@ +# NTT + +### Supported curves + +`bls12-377`, `bls12-381`, `bn-254`, `bw6-761` + +## Example + +```rust +use icicle_bn254::curve::{ScalarCfg, ScalarField}; +use icicle_core::{ntt::{self, NTT}, traits::GenerateRandom}; +use icicle_cuda_runtime::{device_context::DeviceContext, memory::HostOrDeviceSlice, stream::CudaStream}; + +fn main() { + let size = 1 << 12; // Define the size of your input, e.g., 2^12 + + let icicle_omega = ::get_root_of_unity( + size.try_into() + .unwrap(), + ); + + // Generate random inputs + println!("Generating random inputs..."); + let scalars = HostOrDeviceSlice::Host(ScalarCfg::generate_random(size)); + + // Allocate memory on CUDA device for NTT results + let mut ntt_results: HostOrDeviceSlice<'_, ScalarField> = HostOrDeviceSlice::cuda_malloc(size).expect("Failed to allocate CUDA memory"); + + // Create a CUDA stream + let stream = CudaStream::create().expect("Failed to create CUDA stream"); + let ctx = DeviceContext::default(); // Assuming default device context + ScalarCfg::initialize_domain(ScalarField::from_ark(icicle_omega), &ctx).unwrap(); + + // Configure NTT + let mut cfg = ntt::NTTConfig::default(); + cfg.ctx.stream = &stream; + cfg.is_async = true; // Set to true for asynchronous execution + + // Execute NTT on device + println!("Executing NTT on device..."); + ntt::ntt(&scalars, ntt::NTTDir::kForward, &cfg, &mut ntt_results).expect("Failed to execute NTT"); + + // Synchronize CUDA stream to ensure completion + stream.synchronize().expect("Failed to synchronize CUDA stream"); + + // Optionally, move results to host for further processing or verification + println!("NTT execution complete."); +} +``` + +## NTT API overview + +```rust +pub fn ntt( + input: &HostOrDeviceSlice, + dir: NTTDir, + cfg: &NTTConfig, + output: 
&mut HostOrDeviceSlice, +) -> IcicleResult<()> +``` + +`ntt::ntt` expects: + +`input` - buffer to read the inputs of the NTT from.
+`dir` - whether to compute forward or inverse NTT.
+`cfg` - config used to specify extra arguments of the NTT.
+`output` - buffer to write the NTT outputs into. Must be of the same size as `input`. + +The `input` and `output` buffers can be on device or on host. If they are on the host, they will be transferred to the device at runtime. + + +### NTT Config + +```rust +pub struct NTTConfig<'a, S> { + pub ctx: DeviceContext<'a>, + pub coset_gen: S, + pub batch_size: i32, + pub ordering: Ordering, + are_inputs_on_device: bool, + are_outputs_on_device: bool, + pub is_async: bool, + pub ntt_algorithm: NttAlgorithm, +} +``` + +The `NTTConfig` struct is a configuration object used to specify parameters for an NTT instance. + +#### Fields + +- **`ctx: DeviceContext<'a>`**: Specifies the device context, including the device ID and the stream ID. + +- **`coset_gen: S`**: Defines the coset generator used for coset (i)NTTs. By default, this is set to `S::one()`, indicating that no coset is being used. + +- **`batch_size: i32`**: Determines the number of NTTs to compute in a single batch. The default value is 1, meaning that operations are performed on individual inputs without batching. Batch processing can significantly improve performance by leveraging parallelism in GPU computations. + +- **`ordering: Ordering`**: Controls the ordering of inputs and outputs for the NTT operation. This field can be used to specify decimation strategies (in time or in frequency) and the type of butterfly algorithm (Cooley-Tukey or Gentleman-Sande). The ordering is crucial for compatibility with various algorithmic approaches and can impact the efficiency of the NTT. + +- **`are_inputs_on_device: bool`**: Indicates whether the input data has been preloaded in device memory. If `false`, inputs will be copied from host to device. + +- **`are_outputs_on_device: bool`**: Indicates whether the output data is to remain in device memory. If `false`, outputs will be copied from device to host. If the inputs and outputs share the same pointer, the NTT will be computed in place. 
+ +- **`is_async: bool`**: Specifies whether the NTT operation should be performed asynchronously. When set to `true`, the NTT function will not block the CPU, allowing other operations to proceed concurrently. Asynchronous execution requires careful synchronization to ensure data integrity and correctness. + +- **`ntt_algorithm: NttAlgorithm`**: Can be one of `Auto`, `Radix2`, `MixedRadix`. +`Auto` will select the `Radix2` or `MixedRadix` algorithm based on heuristics. +`Radix2` and `MixedRadix` will force the use of an algorithm regardless of the input size or other considerations. You should force a specific algorithm only when you are sure it performs best for your use case. + + +#### Usage + +Example initialization with default settings: + +```rust +let default_config = NTTConfig::default(); +``` + +Customizing the configuration: + +```rust +let custom_config = NTTConfig { + ctx: custom_device_context, + coset_gen: my_coset_generator, + batch_size: 10, + ordering: Ordering::kRN, + are_inputs_on_device: true, + are_outputs_on_device: true, + is_async: false, + ntt_algorithm: NttAlgorithm::MixedRadix, +}; +``` + + +### Modes + +NTT supports two different modes: `Batch NTT` and `Single NTT`. + +You may toggle between single and batch NTT by simply configuring `batch_size` to be larger than 1 in your `NTTConfig`. + +```rust +let mut cfg = ntt::get_default_ntt_config::(); +cfg.batch_size = 10; // NTTs using this config will run in batch mode. +``` + +`batch_size=1` keeps the NTT in single NTT mode. + +Deciding whether to use batch NTT or single NTT is highly dependent on your application and use case. + +### Initializing the NTT Domain + +Before performing NTT operations, it is necessary to initialize the NTT domain; this only needs to be done once per GPU, since the twiddles are cached. 
+ +```rust +ScalarCfg::initialize_domain(ScalarField::from_ark(icicle_omega), &ctx).unwrap(); +``` + +### `initialize_domain` + +```rust +pub fn initialize_domain(primitive_root: F, ctx: &DeviceContext) -> IcicleResult<()> +where + F: FieldImpl, + ::Config: NTT; +``` + +#### Parameters + +- **`primitive_root`**: The primitive root of unity, chosen based on the maximum NTT size required for the computations. It must be of an order that is a power of two. This root is used to generate twiddle factors that are essential for the NTT operations. + +- **`ctx`**: A reference to a `DeviceContext` specifying which device and stream the computation should be executed on. + +#### Returns + +- **`IcicleResult<()>`**: Will return an error if the operation fails. + +### `initialize_domain_fast_twiddles_mode` + +Similar to `initialize_domain`, `initialize_domain_fast_twiddles_mode` is a faster implementation and can be used for larger NTTs. + +```rust +pub fn initialize_domain_fast_twiddles_mode(primitive_root: F, ctx: &DeviceContext) -> IcicleResult<()> +where + F: FieldImpl, + ::Config: NTT; +``` + +#### Parameters + +- **`primitive_root`**: The primitive root of unity, chosen based on the maximum NTT size required for the computations. It must be of an order that is a power of two. This root is used to generate twiddle factors that are essential for the NTT operations. + +- **`ctx`**: A reference to a `DeviceContext` specifying which device and stream the computation should be executed on. + +#### Returns + +- **`IcicleResult<()>`**: Will return an error if the operation fails. 
diff --git a/docs/sidebars.js b/docs/sidebars.js index 943d20b20..a0e75bd47 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -25,9 +25,30 @@ module.exports = { id: "icicle/integrations" }, { - type: "doc", + type: "category", label: "Golang bindings", - id: "icicle/golang-bindings", + link: { + type: `doc`, + id: "icicle/golang-bindings", + }, + collapsed: true, + items: [ + { + type: "doc", + label: "MSM", + id: "icicle/golang-bindings/msm", + }, + { + type: "doc", + label: "NTT", + id: "icicle/golang-bindings/ntt", + }, + { + type: "doc", + label: "Vector operations", + id: "icicle/golang-bindings/vec-ops", + }, + ] }, { type: "category", @@ -40,15 +61,25 @@ module.exports = { items: [ { type: "doc", - label: "Multi GPU Support", - id: "icicle/rust-bindings/multi-gpu", + label: "MSM", + id: "icicle/rust-bindings/msm", + }, + { + type: "doc", + label: "NTT", + id: "icicle/rust-bindings/ntt", }, { type: "doc", label: "Vector operations", id: "icicle/rust-bindings/vec-ops", - } - ] + }, + { + type: "doc", + label: "Multi GPU Support", + id: "icicle/rust-bindings/multi-gpu", + }, + ], }, { type: "category", @@ -66,14 +97,14 @@ module.exports = { }, { type: "doc", - label: "Poseidon Hash", - id: "icicle/primitives/poseidon", + label: "NTT", + id: "icicle/primitives/ntt", }, { type: "doc", - label: "NTT", - id: "icicle/primitives/ntt", - } + label: "Poseidon Hash", + id: "icicle/primitives/poseidon", + }, ], }, {