[onert] Support ReLU6 for training #12388

Closed · 3 tasks done
zetwhite opened this issue Dec 29, 2023 · 3 comments

zetwhite commented Dec 29, 2023

What

Let's support ReLU6 for training.

Parent issue: #12325
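
For reference, ReLU6 is min(max(x, 0), 6), so supporting it in training means the backward pass needs its derivative as well: 1 on the open interval (0, 6) and 0 elsewhere. A minimal NumPy sketch of that math (not the actual onert kernel):

```python
import numpy as np

def relu6(x):
    # forward: clamp activations to the range [0, 6]
    return np.minimum(np.maximum(x, 0.0), 6.0)

def relu6_backward(x, dy):
    # backward: upstream gradients pass through only where 0 < x < 6;
    # the derivative is 0 in both clamped regions (and at the kinks)
    return dy * ((x > 0.0) & (x < 6.0))
```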

Task

zetwhite commented

Draft: #12395

I checked that both the fused activation and the stand-alone activation work well (see the Keras sketch below for the two variants).
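
For context, a sketch of how the two variants can be built in Keras; the layer sizes here are illustrative (the actual models are in the attached model.zip archives), only the fused/stand-alone distinction matters:

```python
import tensorflow as tf

# fused: ReLU6 attached to the layer as its activation,
# which the converter can fuse into the op
fused = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation=tf.nn.relu6, input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# not fused: ReLU6 as a stand-alone layer (ReLU capped at 6)
not_fused = tf.keras.Sequential([
    tf.keras.layers.Dense(16, input_shape=(8,)),
    tf.keras.layers.ReLU(max_value=6.0),
    tf.keras.layers.Dense(1),
])

# same training setup as the tested env below
fused.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=[tf.keras.metrics.MeanSquaredError()])
```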

fused one

model.zip

  • tested env
- learning_rate   = 0.001
- batch_size      = 32
- loss_info       = {loss = mean squared error, reduction = sum over batch size}
- optimizer       = adam 
  • tensorflow
Epoch 1/5
32/32 [==============================] - 0s 947us/step - loss: 0.0518 - mean_squared_error: 0.0518
Epoch 2/5
32/32 [==============================] - 0s 847us/step - loss: 0.0364 - mean_squared_error: 0.0364
Epoch 3/5
32/32 [==============================] - 0s 876us/step - loss: 0.0277 - mean_squared_error: 0.0277
Epoch 4/5
32/32 [==============================] - 0s 898us/step - loss: 0.0223 - mean_squared_error: 0.0223
Epoch 5/5
32/32 [==============================] - 0s 862us/step - loss: 0.0186 - mean_squared_error: 0.0186

  • onert_train
Epoch 1/5 - time: 37.853ms/step - loss: [0] 0.0518
Epoch 2/5 - time: 37.927ms/step - loss: [0] 0.0361
Epoch 3/5 - time: 37.778ms/step - loss: [0] 0.0272
Epoch 4/5 - time: 37.771ms/step - loss: [0] 0.0218
Epoch 5/5 - time: 38.018ms/step - loss: [0] 0.0180

not fused one

model.zip

  • tested env
- learning_rate   = 0.001
- batch_size      = 32
- loss_info       = {loss = mean squared error, reduction = sum over batch size}
- optimizer       = adam 
  • onert_train
Epoch 1/5 - time: 37.706ms/step - loss: [0] 0.0539
Epoch 2/5 - time: 37.724ms/step - loss: [0] 0.0370
Epoch 3/5 - time: 37.751ms/step - loss: [0] 0.0272
Epoch 4/5 - time: 37.816ms/step - loss: [0] 0.0216
Epoch 5/5 - time: 37.829ms/step - loss: [0] 0.0178
  • tensorflow
Epoch 1/5
32/32 [==============================] - 0s 945us/step - loss: 0.0538 - mean_squared_error: 0.0538
Epoch 2/5
32/32 [==============================] - 0s 818us/step - loss: 0.0370 - mean_squared_error: 0.0370
Epoch 3/5
32/32 [==============================] - 0s 813us/step - loss: 0.0273 - mean_squared_error: 0.0273
Epoch 4/5
32/32 [==============================] - 0s 791us/step - loss: 0.0219 - mean_squared_error: 0.0219
Epoch 5/5
32/32 [==============================] - 0s 802us/step - loss: 0.0187 - mean_squared_error: 0.0187

jyoungyun commented

> batch_size = 32

How about using a batch_size that evenly divides the data length? onert_train does not support dynamic shapes (e.g., a variable batch size), so unlike TensorFlow, which uses the whole dataset, onert_train does not train on the remaining samples.

data length: 1000
batch_size: 32

In this case, onert_train does not train on the last 8 samples (1000 = 32*31 + 8, so onert_train runs 31 steps per epoch while TensorFlow runs 32); see the arithmetic sketch below.
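
The step arithmetic, using the numbers above (the alternative batch sizes are just examples of divisors of 1000):

```python
data_length = 1000
batch_size = 32

steps = data_length // batch_size            # 31 full steps per epoch
leftover = data_length - steps * batch_size  # 8 samples onert_train never sees

# a batch size that evenly divides the data length avoids the mismatch:
for candidate in (40, 25, 20, 10):
    assert data_length % candidate == 0      # 25, 40, 50, 100 steps, no leftover
```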

zetwhite commented

Now onert supports ReLU6 for training, so I'm closing this issue.
