[onert] Support ReLU6 for training #12388
Comments
Draft: #12395. I checked that both the fused activation and the stand-alone activation work well.
[screenshot: fused one]
[screenshot: not fused one]
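For reference, here is a minimal hypothetical sketch (not tied to any actual onert kernel; all names are illustrative) of the distinction being tested: a fused activation is applied inside the kernel that produces the tensor, while a stand-alone activation is its own operation in the graph.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative only: "fused" applies ReLU6 inside the producing kernel's
// output loop; "stand-alone" runs ReLU6 as a separate pass over a tensor.
enum class Activation { None, ReLU6 };

float ApplyActivation(float v, Activation act) {
  return act == Activation::ReLU6 ? std::min(std::max(v, 0.f), 6.f) : v;
}

// Fused: the activation is folded into the op that produces the output.
void AddFusedReLU6(const std::vector<float> &a, const std::vector<float> &b,
                   std::vector<float> &out) {
  out.resize(a.size());
  for (std::size_t i = 0; i < a.size(); ++i)
    out[i] = ApplyActivation(a[i] + b[i], Activation::ReLU6);
}

// Stand-alone: ReLU6 is a separate operation applied to an existing tensor.
void ReLU6Op(std::vector<float> &t) {
  for (auto &v : t) v = ApplyActivation(v, Activation::ReLU6);
}
```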
How about using a batch_size that divides the data length evenly? Because onert_train does not support dynamic shapes (e.g., a variable batch size), it does not train on the remaining samples, unlike TensorFlow, which uses the whole dataset. For example, with a data length of 1000, onert_train skips the last 8 samples: 1000 = 32*31 + 8, so onert_train runs 31 steps per epoch while TensorFlow runs 32. The arithmetic is sketched below.
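A small sketch of that arithmetic (standalone C++, not onert code; the constants match the example above):

```cpp
#include <cstdio>

int main() {
  // With a fixed batch size and no dynamic shapes, the remainder of the
  // dataset that does not fill a full batch is never trained on.
  const int data_length = 1000;
  const int batch_size = 32;

  const int steps_per_epoch = data_length / batch_size; // 31 (remainder dropped)
  const int trained = steps_per_epoch * batch_size;     // 992
  const int dropped = data_length - trained;            // 8 samples skipped

  std::printf("steps=%d trained=%d dropped=%d\n", steps_per_epoch, trained, dropped);
  return 0;
}
```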
Now, onert supports ReLU6 in the training feature.
What
Let's support ReLU6 for training
Parent issue: #12325
Task
- [onert] Extract fused activation backprop to OperationUtils #12492
- [onert] Add ReLU6 grad to OperationUtils #12502 (see the sketch below)
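For context, a minimal sketch of how a ReLU6 backward pass is typically computed (this is not the actual OperationUtils code; the function names are illustrative): the incoming gradient passes through only where the forward input was strictly inside (0, 6), and is zeroed in the clamped regions.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// ReLU6 forward: clamp the input to [0, 6].
float ReLU6(float x) { return x < 0.f ? 0.f : (x > 6.f ? 6.f : x); }

// ReLU6 backward: the derivative is 1 where 0 < x < 6 and 0 elsewhere,
// so the incoming gradient is masked by that condition.
void ReLU6Grad(const std::vector<float> &forward_input,
               const std::vector<float> &incoming_grad,
               std::vector<float> &out_grad) {
  assert(forward_input.size() == incoming_grad.size());
  out_grad.resize(forward_input.size());
  for (std::size_t i = 0; i < forward_input.size(); ++i) {
    const bool pass = forward_input[i] > 0.f && forward_input[i] < 6.f;
    out_grad[i] = pass ? incoming_grad[i] : 0.f;
  }
}
```

The same masking pattern applies whether the activation is fused into the producing op or runs stand-alone, which is why extracting the activation backprop into a shared utility (the first task above) makes sense.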