
Commit

Merge pull request #410 from gizatechxyz/develop
Merge Develop into Main
raphaelDkhn authored Oct 26, 2023
2 parents bd07d6d + 87f0e2d commit e67dc3d
Showing 1,841 changed files with 5,063 additions and 3,208 deletions.
9 changes: 9 additions & 0 deletions .all-contributorsrc
@@ -179,6 +179,15 @@
"contributions": [
"code"
]
},
{
"login": "TsBauer",
"name": "Thomas S. Bauer",
"avatar_url": "https://avatars.githubusercontent.com/u/25390947?v=4",
"profile": "https://brilliantblocks.io",
"contributions": [
"code"
]
}
],
"contributorsPerLine": 7,
2 changes: 1 addition & 1 deletion .github/workflows/test.yaml
@@ -9,5 +9,5 @@ jobs:
- uses: actions/checkout@v3
- uses: software-mansion/setup-scarb@v1
with:
scarb-version: "0.7.0"
scarb-version: "2.3.0"
- run: scarb test --workspace && scarb fmt --workspace
3 changes: 2 additions & 1 deletion README.md
@@ -23,7 +23,7 @@

# Orion: An Open-source Framework for Validity and ZK ML ✨
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[![All Contributors](https://img.shields.io/badge/all_contributors-19-orange.svg?style=flat-square)](#contributors-)
[![All Contributors](https://img.shields.io/badge/all_contributors-20-orange.svg?style=flat-square)](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->

Orion is an open-source, community-driven framework dedicated to Provable Machine Learning. It provides essential components and a new ONNX runtime for building verifiable Machine Learning models using [STARKs](https://starkware.co/stark/).
@@ -92,6 +92,7 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
<td align="center" valign="top" width="14.28%"><a href="https://github.com/chachaleo"><img src="https://avatars.githubusercontent.com/u/49371958?v=4?s=100" width="100px;" alt="Charlotte"/><br /><sub><b>Charlotte</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=chachaleo" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/0xfulanito"><img src="https://avatars.githubusercontent.com/u/145947367?v=4?s=100" width="100px;" alt="0xfulanito"/><br /><sub><b>0xfulanito</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=0xfulanito" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/0x73e"><img src="https://avatars.githubusercontent.com/u/132935850?v=4?s=100" width="100px;" alt="0x73e"/><br /><sub><b>0x73e</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=0x73e" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://brilliantblocks.io"><img src="https://avatars.githubusercontent.com/u/25390947?v=4?s=100" width="100px;" alt="Thomas S. Bauer"/><br /><sub><b>Thomas S. Bauer</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=TsBauer" title="Code">💻</a></td>
</tr>
</tbody>
</table>
20 changes: 20 additions & 0 deletions Scarb.lock
@@ -0,0 +1,20 @@
# Code generated by scarb DO NOT EDIT.
version = 1

[[package]]
name = "alexandria_data_structures"
version = "0.1.0"
source = "git+https://github.com/keep-starknet-strange/alexandria.git?rev=f37d73d#f37d73d8a8248e4d8dc65de3949333e30bda022f"

[[package]]
name = "cubit"
version = "1.2.0"
source = "git+https://github.com/raphaelDkhn/cubit.git#e6331ebf98c5d5f442a0e5edefe0b367c8e270d9"

[[package]]
name = "orion"
version = "0.1.2"
dependencies = [
"alexandria_data_structures",
"cubit",
]
9 changes: 3 additions & 6 deletions Scarb.toml
@@ -1,17 +1,14 @@
[package]
name = "orion"
version = "0.1.2"
version = "0.1.5"
description = "ONNX Runtime in Cairo for verifiable ML inference using STARK"
homepage = "https://github.com/gizatechxyz/orion"

[dependencies]
alexandria_data_structures = { git = "https://github.com/keep-starknet-strange/alexandria.git", rev = "f37d73d" }
cubit = { git = "https://github.com/influenceth/cubit.git" }
cubit = { git = "https://github.com/raphaelDkhn/cubit.git" }

[scripts]
sierra = "cairo-compile . -r"
docgen = "cd docgen && cargo run"
nodegen = "python3 nodegen/node/__init__.py"

[workspace]
members = ["tests/"]
nodegen = "python3 nodegen/node/__init__.py"
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -89,6 +89,7 @@
* [tensor.clip](framework/operators/tensor/tensor.clip.md)
* [tensor.identity](framework/operators/tensor/tensor.identity.md)
* [tensor.and](framework/operators/tensor/tensor.and.md)
* [tensor.where](framework/operators/tensor/tensor.where.md)
* [Neural Network](framework/operators/neural-network/README.md)
* [nn.relu](framework/operators/neural-network/nn.relu.md)
* [nn.leaky\_relu](framework/operators/neural-network/nn.leaky\_relu.md)
3 changes: 2 additions & 1 deletion docs/framework/compatibility.md
@@ -66,5 +66,6 @@ You can see below the list of current supported ONNX Operators:
| [Xor](operators/tensor/tensor.xor.md) | :white\_check\_mark: |
| [Or](operators/tensor/tensor.or.md) | :white\_check\_mark: |
| [Gemm](operators/neural-network/nn.gemm.md) | :white\_check\_mark: |
| [Where](operators/tensor/tensor.where.md) | :white\_check\_mark: |

Current Operators support: **60/156 (38%)**
Current Operators support: **61/156 (39%)**
2 changes: 1 addition & 1 deletion docs/framework/get-started.md
@@ -3,7 +3,7 @@
In this section, we will guide you to start using Orion successfully. We will help you install Cairo and add the Orion dependency to your project.

{% hint style="info" %}
Orion supports <mark style="color:orange;">**Cairo v2.2.0**</mark> and <mark style="color:orange;">**Scarb 0.7.0**</mark>
Orion supports <mark style="color:orange;">**Cairo and Scarb v2.3.0**</mark>
{% endhint %}

## 📦 Installations
1 change: 1 addition & 0 deletions docs/framework/operators/tensor/README.md
@@ -85,6 +85,7 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.clip`](tensor.clip.md) | Clip operator limits the given input within an interval. |
| [`tensor.and`](tensor.and.md) | Computes the logical AND of two tensors element-wise. |
| [`tensor.identity`](tensor.identity.md) | Return a Tensor with the same shape and contents as input. |
| [`tensor.where`](tensor.where.md) | Return elements chosen from x or y depending on condition. |

## Arithmetic Operations

48 changes: 48 additions & 0 deletions docs/framework/operators/tensor/tensor.where.md
@@ -0,0 +1,48 @@
# tensor.where

```rust
fn where(self: @Tensor<T>, x: @Tensor<T>, y: @Tensor<T>) -> Tensor<T>;
```

Computes a new tensor by selecting values from tensor x (resp. y) at
indices where the condition is 1 (resp. 0).

## Args

* `self`(`@Tensor<T>`) - The condition tensor
* `x`(`@Tensor<T>`) - The first input tensor
* `y`(`@Tensor<T>`) - The second input tensor

## Panics

* Panics if the shapes are neither equal nor broadcastable

## Returns

Returns a new `Tensor<T>` with the broadcasted shape of the inputs, with elements
chosen from x or y depending on the condition.

## Example

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn where_example() -> Tensor<u32> {
let tensor_cond = TensorTrait::<u32>::new(
shape: array![2, 2].span(), data: array![0, 1, 0, 1].span(),
);

let tensor_x = TensorTrait::<u32>::new(
shape: array![2, 2].span(), data: array![2, 4, 6, 8].span(),
);

let tensor_y = TensorTrait::<u32>::new(
shape: array![2, 2].span(), data: array![1, 3, 5, 9].span(),
);

    return tensor_cond.where(@tensor_x, @tensor_y);
}
>>> [1,4,5,8]
```
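For intuition only: the selection and broadcasting behavior mirrors NumPy's `np.where`, which the node generator below also uses as its reference. A minimal sketch (assuming NumPy is installed) reproducing the example above plus a broadcast case:

```python
import numpy as np

# Reference semantics of tensor.where: take from x where cond is non-zero, else from y.
cond = np.array([[0, 1], [0, 1]], dtype=np.uint32)
x = np.array([[2, 4], [6, 8]], dtype=np.uint32)
y = np.array([[1, 3], [5, 9]], dtype=np.uint32)
print(np.where(cond, x, y))    # [[1 4] [5 8]] -- matches the Cairo example above

# Shapes only need to be broadcastable, e.g. a (1, 1) condition against (2, 2) inputs.
cond_b = np.array([[1]], dtype=np.uint32)
print(np.where(cond_b, x, y))  # [[2 4] [6 8]] -- the scalar condition is broadcast
```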
207 changes: 207 additions & 0 deletions nodegen/node/where.py
@@ -0,0 +1,207 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_node, make_test, to_fp, Tensor, Dtype, FixedImpl


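# Test-vector generator for the `where` operator: each method builds random
# condition / x / y tensors for one dtype, computes the expected output with
# np.where as the reference, and emits test fixtures via make_node / make_test.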
class Where(RunAll):

@staticmethod
def where_u32():
def default():
cond = np.random.choice([1, 0], (3, 3, 3)).astype(np.uint32)
x = np.random.randint(0, 6, (3, 3, 3)).astype(np.uint32)
y = np.random.randint(0, 6, (3, 3, 3)).astype(np.uint32)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.U32, cond.shape, cond.flatten())
x = Tensor(Dtype.U32, x.shape, x.flatten())
y = Tensor(Dtype.U32, y.shape, y.flatten())
z = Tensor(Dtype.U32, z.shape, z.flatten())

name = "where_u32"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

def broadcast():
cond = np.random.choice([1, 0], (1, 1)).astype(np.uint32)
x = np.random.randint(0, 6, (2, 2)).astype(np.uint32)
y = np.random.randint(0, 6, (1, 2)).astype(np.uint32)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.U32, cond.shape, cond.flatten())
x = Tensor(Dtype.U32, x.shape, x.flatten())
y = Tensor(Dtype.U32, y.shape, y.flatten())
z = Tensor(Dtype.U32, z.shape, z.flatten())

name = "where_u32_broadcast"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

default()
broadcast()

@staticmethod
def where_i32():
def default():
cond = np.random.choice([1, 0], (3, 3, 3)).astype(np.int32)
x = np.random.randint(0, 6, (3, 3, 3)).astype(np.int32)
y = np.random.randint(0, 6, (3, 3, 3)).astype(np.int32)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.I32, cond.shape, cond.flatten())
x = Tensor(Dtype.I32, x.shape, x.flatten())
y = Tensor(Dtype.I32, y.shape, y.flatten())
z = Tensor(Dtype.I32, z.shape, z.flatten())

name = "where_i32"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

def broadcast():
cond = np.random.choice([1, 0], (1, 1)).astype(np.int32)
x = np.random.randint(0, 6, (2, 2)).astype(np.int32)
y = np.random.randint(0, 6, (1, 2)).astype(np.int32)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.I32, cond.shape, cond.flatten())
x = Tensor(Dtype.I32, x.shape, x.flatten())
y = Tensor(Dtype.I32, y.shape, y.flatten())
z = Tensor(Dtype.I32, z.shape, z.flatten())

name = "where_i32_broadcast"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

default()
broadcast()

@staticmethod
def where_i8():
def default():
cond = np.random.choice([1, 0], (3, 3, 3)).astype(np.int8)
x = np.random.randint(0, 6, (3, 3, 3)).astype(np.int8)
y = np.random.randint(0, 6, (3, 3, 3)).astype(np.int8)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.I8, cond.shape, cond.flatten())
x = Tensor(Dtype.I8, x.shape, x.flatten())
y = Tensor(Dtype.I8, y.shape, y.flatten())
z = Tensor(Dtype.I8, z.shape, z.flatten())

name = "where_i8"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

def broadcast():
cond = np.random.choice([1, 0], (1, 1)).astype(np.int8)
x = np.random.randint(0, 6, (2, 2)).astype(np.int8)
y = np.random.randint(0, 6, (1, 2)).astype(np.int8)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.I8, cond.shape, cond.flatten())
x = Tensor(Dtype.I8, x.shape, x.flatten())
y = Tensor(Dtype.I8, y.shape, y.flatten())
z = Tensor(Dtype.I8, z.shape, z.flatten())

name = "where_i8_broadcast"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

default()
broadcast()

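# Fixed-point variants: values are generated as floats and quantized with to_fp,
# so the emitted tensors use the FP8x23 (and, below, FP16x16) representations.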
@staticmethod
def where_fp8x23():
def default():
cond = np.random.choice([1, 0], (3, 3, 3)).astype(np.float64)
x = np.random.randint(0, 6, (3, 3, 3)).astype(np.float64)
y = np.random.randint(0, 6, (3, 3, 3)).astype(np.float64)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.FP8x23, cond.shape, to_fp(
cond.flatten(), FixedImpl.FP8x23))
x = Tensor(Dtype.FP8x23, x.shape, to_fp(
x.flatten(), FixedImpl.FP8x23))
y = Tensor(Dtype.FP8x23, y.shape, to_fp(
y.flatten(), FixedImpl.FP8x23))
z = Tensor(Dtype.FP8x23, z.shape, to_fp(
z.flatten(), FixedImpl.FP8x23))

name = "where_fp8x23"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

def broadcast():
cond = np.random.choice([1, 0], (1, 1)).astype(np.float64)
x = np.random.randint(0, 6, (2, 2)).astype(np.float64)
y = np.random.randint(0, 6, (1, 2)).astype(np.float64)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.FP8x23, cond.shape, to_fp(
cond.flatten(), FixedImpl.FP8x23))
x = Tensor(Dtype.FP8x23, x.shape, to_fp(
x.flatten(), FixedImpl.FP8x23))
y = Tensor(Dtype.FP8x23, y.shape, to_fp(
y.flatten(), FixedImpl.FP8x23))
z = Tensor(Dtype.FP8x23, z.shape, to_fp(
z.flatten(), FixedImpl.FP8x23))

name = "where_fp8x23_broadcast"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

default()
broadcast()

@staticmethod
def where_fp16x16():
def default():
cond = np.random.choice([1, 0], (3, 3, 3)).astype(np.float64)
x = np.random.randint(0, 6, (3, 3, 3)).astype(np.float64)
y = np.random.randint(0, 6, (3, 3, 3)).astype(np.float64)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.FP16x16, cond.shape, to_fp(
cond.flatten(), FixedImpl.FP16x16))
x = Tensor(Dtype.FP16x16, x.shape, to_fp(
x.flatten(), FixedImpl.FP16x16))
y = Tensor(Dtype.FP16x16, y.shape, to_fp(
y.flatten(), FixedImpl.FP16x16))
z = Tensor(Dtype.FP16x16, z.shape, to_fp(
z.flatten(), FixedImpl.FP16x16))

name = "where_fp16x16"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

def broadcast():
cond = np.random.choice([1, 0], (1, 1)).astype(np.float64)
x = np.random.randint(0, 6, (2, 2)).astype(np.float64)
y = np.random.randint(0, 6, (1, 2)).astype(np.float64)

z = np.where(cond, x, y).astype(x.dtype)

cond = Tensor(Dtype.FP16x16, cond.shape, to_fp(
cond.flatten(), FixedImpl.FP16x16))
x = Tensor(Dtype.FP16x16, x.shape, to_fp(
x.flatten(), FixedImpl.FP16x16))
y = Tensor(Dtype.FP16x16, y.shape, to_fp(
y.flatten(), FixedImpl.FP16x16))
z = Tensor(Dtype.FP16x16, z.shape, to_fp(
z.flatten(), FixedImpl.FP16x16))

name = "where_fp16x16_broadcast"
make_node([cond, x, y], [z], name)
make_test([cond, x, y], z, "input_0.where(@input_1,@input_2)", name)

default()
broadcast()
2 changes: 1 addition & 1 deletion src/lib.cairo
@@ -1,4 +1,4 @@
mod operators;
mod numbers;
mod utils;

mod test_helper;