
Cadence fusiong3 operators m2 #7490

Open: wants to merge 13 commits into base: main
Conversation

Contributor

@ckmadhira ckmadhira commented Jan 3, 2025

Summary

Added new operators sub, div, exp, permute, slice, and mean in backends/cadence/fusion_g3.
To reduce cycle counts, disabled error checks in the operators via the "OPT_ARG_CHECK" macro.
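The summary says argument checks are disabled for cycle reduction via an OPT_ARG_CHECK macro. The PR does not show the macro body here, so the following is only a minimal sketch of how such compile-time gating is commonly done; the guard name OPT_MINIMAL_CHECKS and the sum_n kernel are hypothetical:

```cpp
// Hypothetical sketch: when OPT_MINIMAL_CHECKS is defined, argument
// checks compile away to nothing, saving cycles on the DSP.
#ifdef OPT_MINIMAL_CHECKS
#define OPT_ARG_CHECK(cond, retval) ((void)0)
#else
#define OPT_ARG_CHECK(cond, retval) \
  do {                              \
    if (!(cond)) {                  \
      return (retval);              \
    }                               \
  } while (0)
#endif

// Illustrative kernel using the macro: with checks enabled, it
// returns -1 on a bad argument; otherwise it computes a sum.
int sum_n(const int* data, int n) {
  OPT_ARG_CHECK(data != nullptr && n >= 0, -1);
  int s = 0;
  for (int i = 0; i < n; ++i) {
    s += data[i];
  }
  return s;
}
```

With OPT_MINIMAL_CHECKS defined, the check expands to a no-op and the branch disappears entirely, which is the cycle saving the summary refers to.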


pytorch-bot bot commented Jan 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7490

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit e79b466 with merge base a29dc49:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 3, 2025
@ckmadhira ckmadhira force-pushed the cadence_fusiong3_operators_M2 branch from bc5b6f0 to c77741a Compare January 3, 2025 11:03
Comment on lines 19 to 26
using ::executorch::aten::Scalar;
using ::executorch::aten::ScalarType;
using ::executorch::aten::Tensor;
using ::executorch::runtime::canCast;
using ::executorch::runtime::Error;
using ::executorch::runtime::KernelRuntimeContext;
using exec_aten::Scalar;
using exec_aten::ScalarType;
using exec_aten::Tensor;
using executorch::runtime::canCast;
using torch::executor::Error;
using torch::executor::KernelRuntimeContext;
Contributor

I think we need to retain the full ones, I'll let @hsharma35 confirm

Contributor Author

As part of the M1 release, we used "exec_aten::Scalar", similar to what is available in HiFi. Later, I think, Zonglin changed this in his PR. We tried to retain the original. @hsharma35 and @zonglinpeng, please let me know the correct namespace to use.

Contributor

Nit: Please use the latest namespaces/headers, similar to op_add.cpp in master.
The headers/namespaces there follow the ExecuTorch C++ style guide.
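If the reviewer's suggestion is applied, the duplicated declarations in the hunk above would collapse to only the fully-qualified forms; a sketch, assuming the exec_aten names are older aliases of the same types:

```cpp
// Keep only the fully-qualified declarations; the exec_aten ones
// refer to the same underlying types and can be dropped.
using ::executorch::aten::Scalar;
using ::executorch::aten::ScalarType;
using ::executorch::aten::Tensor;
using ::executorch::runtime::canCast;
using ::executorch::runtime::Error;
using ::executorch::runtime::KernelRuntimeContext;
```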

Contributor Author

@hsharma35 - Updated all the operators to match op_add.cpp in master.

Comment on lines 16 to 19
using ::executorch::aten::ScalarType;
using ::executorch::aten::Tensor;
using ::executorch::runtime::Error;
using ::executorch::runtime::KernelRuntimeContext;
using exec_aten::Scalar;
using exec_aten::ScalarType;
using exec_aten::Tensor;
using torch::executor::Error;
using torch::executor::KernelRuntimeContext;
Contributor

same here

Comment on lines 18 to 21
using ::executorch::aten::Scalar;
using ::executorch::aten::ScalarType;
using ::executorch::aten::Tensor;
using ::executorch::runtime::Error;
using ::executorch::runtime::KernelRuntimeContext;
using exec_aten::Scalar;
using exec_aten::ScalarType;
using exec_aten::Tensor;
using torch::executor::Error;
using torch::executor::KernelRuntimeContext;
Contributor

I won't repeat this on the other files, but the same applies here and, I assume, to all the new files.

@@ -30,6 +29,15 @@ namespace impl {
namespace G3 {
namespace native {

#define XT_KERNEL_CHECK(ctx, out, kernel, ...) \
Contributor

Why is this defined so many times? We should at least be able to move it to a shared location.

Contributor

PR #7516 creates a separate header for the XT_KERNEL_CHECK macro. Maybe rebase on top of it and include the common header instead? We can clean up in a different PR too.
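For context, the discussion is about factoring a kernel-check macro into a common header. The PR does not show the macro body, so the following is only a simplified stand-in for what a macro of this shape typically does (invoke a vendor kernel that returns a status code and bail out of the operator on failure); the Error enum, fake_kernel, and run_op names are hypothetical, and the real macro's (ctx, out, kernel, ...) signature is reduced here to an error out-parameter:

```cpp
// Hypothetical sketch of a kernel-check macro: call a kernel that
// returns a status code, and abort the calling operator if the
// code is nonzero.
enum class Error { Ok, Internal };

#define XT_KERNEL_CHECK(err_out, kernel, ...) \
  do {                                        \
    if ((kernel)(__VA_ARGS__) != 0) {         \
      (err_out) = Error::Internal;            \
      return;                                 \
    }                                         \
  } while (0)

// Stand-in vendor kernel: fails (nonzero) when n is negative.
int fake_kernel(int n) { return n < 0 ? -1 : 0; }

void run_op(int n, Error& err) {
  err = Error::Ok;
  XT_KERNEL_CHECK(err, fake_kernel, n);
  // ... the rest of the operator runs only if the kernel succeeded ...
}
```

Because the macro expands inline in every operator, defining it once in a shared header (as PR #7516 proposes) avoids the repeated definitions the reviewer flagged.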

Contributor Author

Change done.

@facebook-github-bot
Contributor

@zonglinpeng has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

zonglinpeng pushed a commit to zonglinpeng/executorch that referenced this pull request Jan 6, 2025
Summary:
Added new operators sub, div, exp, permute, slice, mean in backends/cadence/fusion_g3
For cycle reduction, disabled error checks in operators using macro "OPT_ARG_CHECK"

Pull Request resolved: pytorch#7490

Differential Revision: D67870337

Pulled By: zonglinpeng
using exec_aten::Scalar;
using exec_aten::ScalarType;
using exec_aten::Tensor;
using executorch::runtime::canCast;
Contributor

Can we please undo the changes in headers / using declarations?
(I think this is likely due to a bad rebase)

@@ -30,6 +29,15 @@ namespace impl {
namespace G3 {
namespace native {

#define XT_KERNEL_CHECK(ctx, out, kernel, ...) \
Contributor

PR #7516 creates a separate header for the XT_KERNEL_CHECK macro. Maybe rebase on top of it and include the common header instead? We can clean up in a different PR too.

Comment on lines 17 to 21
using Tensor = exec_aten::Tensor;
using ScalarType = exec_aten::ScalarType;
using IntArrayRef = exec_aten::ArrayRef<int64_t>;
using torch::executor::Error;
using torch::executor::KernelRuntimeContext;
Contributor

Please undo the namespace and header changes.

zonglinpeng pushed a commit to zonglinpeng/executorch that referenced this pull request Jan 7, 2025
Summary:
Added new operators sub, div, exp, permute, slice, mean in backends/cadence/fusion_g3
For cycle reduction, disabled error checks in operators using macro "OPT_ARG_CHECK"

Pull Request resolved: pytorch#7490

Differential Revision: D67870337

Pulled By: zonglinpeng
@ckmadhira ckmadhira force-pushed the cadence_fusiong3_operators_M2 branch from 897987e to b626b7f Compare January 7, 2025 06:41
@facebook-github-bot
Contributor

@zonglinpeng has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. topic: not user facing
5 participants