feat: global maxpool operator #584

Open
wants to merge 7 commits into base: develop
9 changes: 9 additions & 0 deletions .all-contributorsrc
@@ -288,6 +288,15 @@
"code"
]
},
{
"login": "RajeshRk18",
"name": "Rajesh",
"avatar_url": "https://avatars.githubusercontent.com/u/87425610?v=4",
"profile": "https://github.com/RajeshRk18",
"contributions": [
"code"
]
},
{
"login": "TAdev0",
"name": "Tristan",
9 changes: 8 additions & 1 deletion docs/CHANGELOG.md
@@ -3,6 +3,13 @@
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## [Unreleased] - 2024-02-24

## Added
- Global Maxpool operator

## [Unreleased] - 2024-02-21

## Added
@@ -12,7 +19,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),

## Added
- Scatter Nd Operator.

## [Unreleased] - 2023-12-25

## Added
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -173,6 +173,7 @@
* [nn.thresholded\_relu](framework/operators/neural-network/nn.thresholded\_relu.md)
* [nn.gemm](framework/operators/neural-network/nn.gemm.md)
* [nn.grid\_sample](framework/operators/neural-network/nn.grid\_sample.md)
* [nn.global\_maxpool](framework/operators/neural-network/nn.global\_maxpool.md)
* [nn.col2im](framework/operators/neural-network/nn.col2im.md)
* [nn.conv_transpose](framework/operators/neural-network/nn.conv\_transpose.md)
* [nn.conv](framework/operators/neural-network/nn.conv.md)
1 change: 1 addition & 0 deletions docs/framework/compatibility.md
@@ -125,6 +125,7 @@ You can see below the list of current supported ONNX Operators:
| [HammingWindow](operators/tensor/tensor.tensor.hamming_window.md) | :white\_check\_mark: |
| [BlackmanWindow](operators/tensor/tensor.tensor.blackman_window.md) | :white\_check\_mark: |
| [RandomUniformLike](operators/tensor/tensor.tensor.random_uniform_like.md) | :white\_check\_mark: |
| [GlobalMaxPool](operators/neural-network/nn.global_maxpool.md) | :white\_check\_mark: |
| [LabelEncoder](operators/tensor/tensor.label_encoder.md) | :white\_check\_mark: |

Current Operators support: **118/156 (75%)**
6 changes: 3 additions & 3 deletions docs/framework/operators/neural-network/README.md
@@ -37,6 +37,6 @@ Orion currently supports these `NN` types.
| [`nn.gemm`](nn.gemm.md) | Performs General Matrix multiplication. |
| [`nn.grid_sample`](nn.grid\_sample.md) | Computes the grid sample of the input tensor and input grid. |
| [`nn.col2im`](nn.col2im.md) | Rearranges column blocks back into a multidimensional image |
| [`nn.conv_transpose`](nn.conv\_transpose.md) | Performs the convolution transpose of the input data tensor and weight tensor. |
| [`nn.conv`](nn.conv.md) | Performs the convolution of the input data tensor and weight tensor. |
| [`nn.global_maxpool`](nn.global_maxpool.md) | Computes the global maxpooling of the input tensor. |
61 changes: 61 additions & 0 deletions docs/framework/operators/neural-network/nn.global_maxpool.md
@@ -0,0 +1,61 @@
# NNTrait::global_maxpool

```rust
fn global_maxpool(
X: @Tensor<T>,
) -> Tensor<T>;
```

Given an input tensor X, computes the global max pooling, i.e. the maximum over all spatial dimensions of each channel.

## Args

* `X`(`@Tensor<T>`) - Input tensor of shape (N, C, H, W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data. For the non-image case, the dimensions take the form (N, C, D1, D2, ..., Dn), where N is the batch size.

## Returns

A `Tensor<T>` of shape (N, C, 1, 1). For the non-image case, a `Tensor<T>` of shape (N, C, 1, ..., 1), with one singleton dimension per spatial axis.

## Example

```rust
use orion::operators::nn::{NNTrait, U32NN};
use orion::operators::tensor::{Tensor, TensorTrait};

fn example_global_maxpool() -> Tensor<u32> {
    let mut shape = ArrayTrait::<usize>::new();
    shape.append(1);
    shape.append(2);
    shape.append(4);
    shape.append(2);

    let mut data = ArrayTrait::<u32>::new();
    data.append(36);
    data.append(63);
    data.append(57);
    data.append(62);
    data.append(13);
    data.append(87);
    data.append(44);
    data.append(6);
    data.append(35);
    data.append(35);
    data.append(75);
    data.append(63);
    data.append(49);
    data.append(11);
    data.append(45);
    data.append(11);

    let X = TensorTrait::new(shape.span(), data.span());

    return NNTrait::global_maxpool(@X);
}

>>> [[[[87]],
      [[75]]]]

```
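For cross-checking, the example's expected output can be reproduced in NumPy, where global max pooling is simply a reduction over the spatial axes with `keepdims=True` (a verification sketch, not part of the library):

```python
import numpy as np

# Same 1x2x4x2 input as in the Cairo example above.
x = np.array([36, 63, 57, 62, 13, 87, 44, 6,
              35, 35, 75, 63, 49, 11, 45, 11]).reshape(1, 2, 4, 2)

# Global max pooling: reduce over the spatial axes, keeping them as size-1 dims.
y = np.max(x, axis=(2, 3), keepdims=True)

print(y.shape)           # (1, 2, 1, 1)
print(y.flatten())       # [87 75]
```

The two pooled values are the maxima of the two 4x2 channel slices, matching the `[[[[87]], [[75]]]]` output above.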
55 changes: 55 additions & 0 deletions nodegen/node/global_maxpool.py
@@ -0,0 +1,55 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_test, to_fp, Tensor, Dtype, FixedImpl, Trait

def global_maxpool(data: np.ndarray) -> np.ndarray:
    # Number of spatial dimensions (everything after the N and C axes).
    spatial_dims = np.ndim(data) - 2

    # Reduce over all spatial axes.
    result = np.max(data, axis=tuple(range(2, 2 + spatial_dims)))

    # Restore the reduced axes as singleton dimensions.
    for _ in range(spatial_dims):
        result = np.expand_dims(result, -1)

    return result

class Global_maxpool(RunAll):

    @staticmethod
    # We test here with the fp8x23 implementation.
    def fp8x23():
        # Create a random numpy array:
        x = np.random.randint(-3, 3, (2, 2, 4, 4)).astype(np.float64)
        # Define the expected result:
        y = global_maxpool(x)
        # Convert the input and output to the Tensor class, similar to Orion's Tensor struct:
        x = Tensor(Dtype.FP8x23, x.shape, to_fp(x.flatten(), FixedImpl.FP8x23))
        # Convert the float values in `y` to fixed point with the `to_fp` method:
        y = Tensor(Dtype.FP8x23, y.shape, to_fp(y.flatten(), FixedImpl.FP8x23))

        # Define the name of the generated folder.
        name = "global_maxpool_fp8x23"
        # Invoke `make_test` to generate the corresponding Cairo tests:
        make_test(
            [x],  # List of input tensors.
            y,  # The expected output result.
            "NNTrait::global_maxpool(@input_0)",  # The code signature.
            name,  # The name of the generated folder.
            Trait.NN  # The trait, if the function is present in either the TensorTrait or NNTrait.
        )

    # We test here with the fp16x16 implementation.
    @staticmethod
    def fp16x16():
        x = np.random.uniform(-3, 3, (2, 2, 4, 4)).astype(np.float64)
        y = global_maxpool(x)

        x = Tensor(Dtype.FP16x16, x.shape, to_fp(
            x.flatten(), FixedImpl.FP16x16))
        y = Tensor(Dtype.FP16x16, y.shape, to_fp(
            y.flatten(), FixedImpl.FP16x16))

        name = "global_maxpool_fp16x16"
        make_test([x], y, "NNTrait::global_maxpool(@input_0)",
                  name, Trait.NN)

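As a quick sanity check, the reference helper above can be exercised on a small deterministic input. The function is restated here so the snippet is self-contained; the input values are hypothetical:

```python
import numpy as np

# Standalone restatement of the nodegen reference helper above.
def global_maxpool(data: np.ndarray) -> np.ndarray:
    # Reduce over all spatial axes (everything after N and C).
    result = np.max(data, axis=tuple(range(2, data.ndim)))
    # Restore the reduced axes as singleton dimensions.
    for _ in range(data.ndim - 2):
        result = np.expand_dims(result, -1)
    return result

# Hypothetical 2x2x4x4 input, the same shape the fp8x23 test generates.
x = np.arange(64, dtype=np.float64).reshape(2, 2, 4, 4)
y = global_maxpool(x)

print(y.shape)       # (2, 2, 1, 1)
print(y.flatten())   # [15. 31. 47. 63.]
```

Each (n, c) slice holds 16 consecutive values here, so the pooled outputs are the last element of each slice.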
68 changes: 66 additions & 2 deletions src/operators/nn/core.cairo
@@ -16,8 +16,10 @@ use orion::operators::tensor::core::Tensor;
/// gemm - Performs General Matrix multiplication.
/// grid_sample - Computes the grid sample of the input tensor and input grid.
/// col2im - Rearranges column blocks back into a multidimensional image
/// conv_transpose - Performs the convolution transpose of the input data tensor and weight tensor.
/// conv - Performs the convolution of the input data tensor and weight tensor.
/// conv_transpose - Performs the convolution transpose of the input data tensor and weigth tensor.
/// conv - Performs the convolution of the input data tensor and weigth tensor.
/// global_maxpool - Computes the global max pooling of the input tensor.
trait NNTrait<T> {
/// # NNTrait::relu
///
@@ -1304,4 +1306,66 @@ trait NNTrait<T> {
mode: Option<orion::operators::nn::functional::grid_sample::MODE>,
padding_mode: Option<orion::operators::nn::functional::grid_sample::PADDING_MODE>,
) -> Tensor<T>;
/// # NNTrait::global_maxpool
///
/// ```rust
/// fn global_maxpool(
/// X: @Tensor<T>,
/// ) -> Tensor<T>;
/// ```
///
/// Given an input tensor X, computes the global max pooling, i.e. the maximum over all spatial dimensions of each channel.
///
/// ## Args
///
/// * `X`(`@Tensor<T>`) - Input tensor of shape (N, C, H, W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data. For the non-image case, the dimensions take the form (N, C, D1, D2, ..., Dn), where N is the batch size.
///
/// ## Returns
///
/// A `Tensor<T>` of shape (N, C, 1, 1). For the non-image case, a `Tensor<T>` of shape (N, C, 1, ..., 1), with one singleton dimension per spatial axis.
///
/// ## Example
///
/// ```rust
/// use orion::operators::nn::{NNTrait, U32NN};
/// use orion::operators::tensor::{Tensor, TensorTrait};
///
/// fn example_global_maxpool() -> Tensor<u32> {
///
/// let mut shape = ArrayTrait::<usize>::new();
/// shape.append(1);
/// shape.append(2);
/// shape.append(4);
/// shape.append(2);
///
/// let mut data = ArrayTrait::<u32>::new();
/// data.append(36);
/// data.append(63);
/// data.append(57);
/// data.append(62);
/// data.append(13);
/// data.append(87);
/// data.append(44);
/// data.append(6);
/// data.append(35);
/// data.append(35);
/// data.append(75);
/// data.append(63);
/// data.append(49);
/// data.append(11);
/// data.append(45);
/// data.append(11);
///
/// let mut X = TensorTrait::new(shape.span(), data.span());
///
/// return NNTrait::global_maxpool(
/// @X
/// );
/// }
///
/// >>> [[[[87]],
///       [[75]]]]
///
/// ```
fn global_maxpool(X: @Tensor<T>) -> Tensor<T>;
}
1 change: 1 addition & 0 deletions src/operators/nn/functional.cairo
@@ -16,3 +16,4 @@ mod conv_transpose;
mod depth_to_space;
mod space_to_depth;
mod conv;
mod global_maxpool;
64 changes: 64 additions & 0 deletions src/operators/nn/functional/global_maxpool.cairo
@@ -0,0 +1,64 @@
use core::array::ArrayTrait;
use core::array::SpanTrait;
use orion::numbers::NumberTrait;
use core::option::OptionTrait;
use orion::operators::tensor::core::{Tensor, TensorTrait};
use orion::operators::tensor::math::max_in_tensor::max_in_tensor;

/// Cf: NNTrait::global_maxpool docstring
fn global_maxpool<
    T,
    MAG,
    impl TNumber: NumberTrait<T, MAG>,
    impl TTensor: TensorTrait<T>,
    impl TPartialOrd: PartialOrd<T>,
    impl TCopy: Copy<T>,
    impl TDrop: Drop<T>
>(
    X: @Tensor<T>
) -> Tensor<T> {
    assert((*X).shape.len() == 4, 'Must be a 4D tensor');

    let mut data = (*X).data;
    let mut global_max_vals = ArrayTrait::new();

    let N = (*X).shape.at(0);
    let C = (*X).shape.at(1);
    let H = (*X).shape.at(2);
    let W = (*X).shape.at(3);

    // Number of elements in one (n, c) spatial slice: height * width.
    let area = *H * *W;

    let mut accum = 0;
    loop {
        if data.len() == 0 {
            break;
        }

        // Collect the next `area` elements, i.e. one contiguous spatial slice.
        let mut sub_tensor = ArrayTrait::new();
        loop {
            match data.pop_front() {
                Option::Some(item) => {
                    sub_tensor.append(*item);
                    accum += 1;
                    if accum % area == 0 {
                        break;
                    }
                },
                Option::None => { break; },
            };
        };

        // The pooled value for this slice is its global maximum.
        global_max_vals.append(max_in_tensor::<T>(sub_tensor.span()));
    };

    let singleton_dim: usize = 1;

    TensorTrait::new(
        array![*N, *C, singleton_dim, singleton_dim].span(), global_max_vals.span()
    )
}
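The slicing strategy of the Cairo function above (walk the flat data buffer in chunks of `H * W` and take the maximum of each chunk) can be sketched in Python; `global_maxpool_flat` is a hypothetical name used only for illustration:

```python
import numpy as np

def global_maxpool_flat(flat, shape):
    # shape = (N, C, H, W); in row-major layout each contiguous
    # H*W chunk of the flat buffer is one (n, c) spatial slice.
    n, c, h, w = shape
    area = h * w
    maxes = [max(flat[i * area:(i + 1) * area]) for i in range(n * c)]
    # Pooled output keeps the spatial axes as singleton dimensions.
    return np.array(maxes).reshape(n, c, 1, 1)

# Same flat data as the docstring example: shape (1, 2, 4, 2).
flat = [36, 63, 57, 62, 13, 87, 44, 6,
        35, 35, 75, 63, 49, 11, 45, 11]
print(global_maxpool_flat(flat, (1, 2, 4, 2)).flatten())  # [87 75]
```

This mirrors why the Cairo code only needs `pop_front` and a counter: row-major storage makes each channel's spatial data contiguous.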
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_fp16x16.cairo
@@ -143,4 +143,8 @@ impl FP16x16NN of NNTrait<FP16x16> {
    ) -> Tensor<FP16x16> {
        functional::conv::conv(X, W, B, auto_pad, dilations, group, kernel_shape, pads, strides)
    }

    fn global_maxpool(X: @Tensor<FP16x16>) -> Tensor<FP16x16> {
        functional::global_maxpool::global_maxpool(X)
    }
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_fp32x32.cairo
@@ -137,4 +137,8 @@ impl FP32x32NN of NNTrait<FP32x32> {
    ) -> Tensor<FP32x32> {
        functional::conv::conv(X, W, B, auto_pad, dilations, group, kernel_shape, pads, strides)
    }

    fn global_maxpool(X: @Tensor<FP32x32>) -> Tensor<FP32x32> {
        functional::global_maxpool::global_maxpool(X)
    }
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_fp64x64.cairo
@@ -137,4 +137,8 @@ impl FP64x64NN of NNTrait<FP64x64> {
    ) -> Tensor<FP64x64> {
        functional::conv::conv(X, W, B, auto_pad, dilations, group, kernel_shape, pads, strides)
    }

    fn global_maxpool(X: @Tensor<FP64x64>) -> Tensor<FP64x64> {
        functional::global_maxpool::global_maxpool(X)
    }
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_fp8x23.cairo
@@ -139,4 +139,8 @@ impl FP8x23NN of NNTrait<FP8x23> {
    ) -> Tensor<FP8x23> {
        functional::conv::conv(X, W, B, auto_pad, dilations, group, kernel_shape, pads, strides)
    }

    fn global_maxpool(X: @Tensor<FP8x23>) -> Tensor<FP8x23> {
        functional::global_maxpool::global_maxpool(X)
    }
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_i32.cairo
@@ -130,4 +130,8 @@ impl I32NN of NNTrait<i32> {
    ) -> Tensor<i32> {
        functional::conv::conv(X, W, B, auto_pad, dilations, group, kernel_shape, pads, strides)
    }

    fn global_maxpool(X: @Tensor<i32>) -> Tensor<i32> {
        functional::global_maxpool::global_maxpool(X)
    }
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_i8.cairo
@@ -130,4 +130,8 @@ impl I8NN of NNTrait<i8> {
    ) -> Tensor<i8> {
        functional::conv::conv(X, W, B, auto_pad, dilations, group, kernel_shape, pads, strides)
    }

    fn global_maxpool(X: @Tensor<i8>) -> Tensor<i8> {
        functional::global_maxpool::global_maxpool(X)
    }
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_u32.cairo
@@ -130,4 +130,8 @@ impl U32NN of NNTrait<u32> {
    ) -> Tensor<u32> {
        functional::conv::conv(X, W, B, auto_pad, dilations, group, kernel_shape, pads, strides)
    }

    fn global_maxpool(X: @Tensor<u32>) -> Tensor<u32> {
        functional::global_maxpool::global_maxpool(X)
    }
}