Cadence fusiong3 operators m2 #7490
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7490
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit e79b466 with merge base a29dc49.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from bc5b6f0 to c77741a.
-using ::executorch::aten::Scalar;
-using ::executorch::aten::ScalarType;
-using ::executorch::aten::Tensor;
-using ::executorch::runtime::canCast;
-using ::executorch::runtime::Error;
-using ::executorch::runtime::KernelRuntimeContext;
+using exec_aten::Scalar;
+using exec_aten::ScalarType;
+using exec_aten::Tensor;
+using executorch::runtime::canCast;
+using torch::executor::Error;
+using torch::executor::KernelRuntimeContext;
I think we need to retain the fully qualified ones; I'll let @hsharma35 confirm.
As part of the M1 release, we used "exec_aten::Scalar", similar to what is available in HiFi. Later, I think, Zonglin changed this in his PR; we tried to retain this. @hsharma35 and @zonglinpeng, please let me know the correct namespace to use.
Nit: please use the latest namespaces/headers, similar to op_add.cpp in master. Those headers/namespaces follow the executorch C++ style guide.
@hsharma35: updated all the operators to match op_add.cpp in master.
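For reference, the fully qualified form from op_add.cpp in master (the same declarations shown as removed in the hunk above; the exact set varies per operator):

using ::executorch::aten::Scalar;
using ::executorch::aten::ScalarType;
using ::executorch::aten::Tensor;
using ::executorch::runtime::canCast;
using ::executorch::runtime::Error;
using ::executorch::runtime::KernelRuntimeContext;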
-using ::executorch::aten::ScalarType;
-using ::executorch::aten::Tensor;
-using ::executorch::runtime::Error;
-using ::executorch::runtime::KernelRuntimeContext;
+using exec_aten::Scalar;
+using exec_aten::ScalarType;
+using exec_aten::Tensor;
+using torch::executor::Error;
+using torch::executor::KernelRuntimeContext;
same here
-using ::executorch::aten::Scalar;
-using ::executorch::aten::ScalarType;
-using ::executorch::aten::Tensor;
-using ::executorch::runtime::Error;
-using ::executorch::runtime::KernelRuntimeContext;
+using exec_aten::Scalar;
+using exec_aten::ScalarType;
+using exec_aten::Tensor;
+using torch::executor::Error;
+using torch::executor::KernelRuntimeContext;
I won't put this on the other files, but the same applies here and to all the new files, I guess.
@@ -30,6 +29,15 @@ namespace impl {
 namespace G3 {
 namespace native {

+#define XT_KERNEL_CHECK(ctx, out, kernel, ...) \
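The macro body is elided in the hunk above. Roughly, it wraps an NNLib kernel call and converts a non-zero return code into a kernel failure reported through the runtime context; a sketch of that shape (reconstructed, not verbatim from the PR):

// Sketch: invoke the kernel, then fail the op with a descriptive message
// if the kernel reported an error (non-zero return code).
#define XT_KERNEL_CHECK(ctx, out, kernel, ...)                       \
  const auto ret = kernel(__VA_ARGS__);                              \
  ET_KERNEL_CHECK_MSG(                                               \
      ctx,                                                           \
      ret == 0,                                                      \
      InvalidArgument,                                               \
      out,                                                           \
      "Failed to run kernel: " #kernel "(" #__VA_ARGS__ ")");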
Why is this defined so many times? We should at least be able to move it to a shared location.
This PR creates a separate header for the XT_KERNEL_CHECK macro: #7516. Maybe rebase on top of it and include the common header instead? We can clean up in a different PR too.
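A minimal sketch of that consolidation, assuming #7516 lands the macro in a shared header (the path below is illustrative, not confirmed in this thread):

// Replace the per-file #define with the common header from #7516
// (header path assumed for illustration):
#include <executorch/backends/cadence/fusion_g3/operators/xt_macros.h>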
Change done.
@zonglinpeng has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Summary: Added new operators sub, div, exp, permute, slice, mean in backends/cadence/fusion_g3. For cycle reduction, disabled error checks in operators using macro "OPT_ARG_CHECK".
Pull Request resolved: pytorch#7490
Differential Revision: D67870337
Pulled By: zonglinpeng
using exec_aten::Scalar;
using exec_aten::ScalarType;
using exec_aten::Tensor;
using executorch::runtime::canCast;
Can we please undo the changes in headers / using declarations?
(I think this is likely due to a bad rebase)
using Tensor = exec_aten::Tensor;
using ScalarType = exec_aten::ScalarType;
using IntArrayRef = exec_aten::ArrayRef<int64_t>;
using torch::executor::Error;
using torch::executor::KernelRuntimeContext;
Please undo the namespace and header changes.
…, dequantize, cat, layer norm, softmax to backends/cadence folder. Added operators to backends/cadence folder
Signed-off-by: [email protected] <[email protected]>
…ument checks in the operators using macro
…n to support integer inputs and float output
…ckends/cadence/fusion_g3/operators to solve a compilation error
…in the operators at backends/cadence/fusion_g3/operators folder
Force-pushed from 897987e to b626b7f.
@zonglinpeng has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Summary
Added new operators sub, div, exp, permute, slice, and mean in backends/cadence/fusion_g3.
For cycle reduction, disabled error checks in the operators using the macro "OPT_ARG_CHECK".
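The summary does not show how "OPT_ARG_CHECK" is wired up; a hypothetical sketch of the compile-out pattern it describes (the macro name comes from the summary, everything else is assumed):

// Hypothetical sketch, not the actual Cadence source: when OPT_ARG_CHECK
// is defined, argument validation compiles out to save cycles; otherwise
// it falls back to the standard executorch kernel check.
#ifdef OPT_ARG_CHECK
#define ARG_CHECK(ctx, cond, out) ((void)0) // checks disabled for cycle reduction
#else
#define ARG_CHECK(ctx, cond, out) \
  ET_KERNEL_CHECK(ctx, cond, InvalidArgument, out)
#endif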