
introduce _to_dim_order_copy op to runtime #1970

Closed
Gasoonjia wants to merge 1 commit

Conversation

Gasoonjia
Contributor

Differential Revision: D53747744


pytorch-bot bot commented Feb 14, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/1970

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 7090428 with merge base 53078c4:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Feb 14, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D53747744

Gasoonjia added commits to Gasoonjia/executorch-1 referencing this pull request between Feb 24 and Mar 5, 2024 (Summary: Pull Request resolved: pytorch#1970; Differential Revision: D53747744), and facebook-github-bot re-posted the Phabricator export notice after each update.
Gasoonjia added a commit to Gasoonjia/executorch-1 that referenced this pull request May 3, 2024
Summary:


This diff creates a new and special operator, `_to_dim_order_copy`. The new operator introduces two critical capabilities to the runtime system:
1. It extracts memory_format information from a tensor based on dim_order instead of stride.
2. It supports both channels_last and contiguous memory_format at runtime. Please note that memory format here is a concept parallel to memory layout, and supporting a new format does not violate our contract of only supporting tensors with a contiguous memory layout. Details can be found [here](https://discuss.pytorch.org/t/contigious-vs-non-contigious-tensor/30107) and [here](https://pytorch.org/blog/tensor-memory-format-matters/).

Furthermore, `_to_dim_order_copy` is a special operator: it does not have a native ATen variant, yet it is needed by most models in the edge dialect. It therefore cannot simply go into `kernels/portable/custom_ops.yaml` (custom ops must be registered manually every time, which does not work for an operator needed by many models) or into `kernels/portable/functions.yaml` (operators there should have a native ATen variant). To overcome that, this diff puts `_to_dim_order_copy`'s ATen mode under `kernels/aten` and its lean mode under `kernels/portable/functions.yaml`.

Also update related dependencies and utils.

Reviewed By: larryliu0820

Differential Revision: D53747744
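For readers unfamiliar with dim order, the sketch below illustrates the relationship between dim_order and memory_format using plain PyTorch. `dim_order_from_strides` is an illustrative helper written for this example, not an ExecuTorch runtime API:
```
import torch

def dim_order_from_strides(t: torch.Tensor):
    # Recover the dim order (outermost -> innermost) by sorting dimensions
    # from largest to smallest stride. A dim-order-aware op can carry this
    # tuple directly instead of re-deriving layout from raw strides.
    return tuple(sorted(range(t.dim()), key=t.stride, reverse=True))

x = torch.randn(2, 3, 4, 5)                   # contiguous NCHW tensor
y = x.to(memory_format=torch.channels_last)   # same logical shape, NHWC in memory

print(dim_order_from_strides(x))  # (0, 1, 2, 3) -> contiguous
print(dim_order_from_strides(y))  # (0, 2, 3, 1) -> channels_last
```
The tuple (0, 2, 3, 1) identifies channels_last without inspecting strides, which is the kind of information `_to_dim_order_copy` operates on.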
Gasoonjia force-pushed the export-D53747744 branch from c8046cc to edbc9ad on May 3, 2024, 18:03
Gasoonjia force-pushed the export-D53747744 branch from edbc9ad to a874034 on May 3, 2024, 18:34
Gasoonjia force-pushed the export-D53747744 branch from a874034 to c7cc8f4 on May 3, 2024, 18:58
Gasoonjia force-pushed the export-D53747744 branch from c7cc8f4 to d889fd7 on May 3, 2024, 19:01
Gasoonjia force-pushed the export-D53747744 branch from d889fd7 to 7090428 on May 3, 2024, 20:56
Gasoonjia added further commits to Gasoonjia/executorch-1 referencing this pull request on May 3–4, 2024, each with the same summary (Differential Revision: D53747744).
@facebook-github-bot
Contributor

This pull request has been merged in cf40792.

nathanaelsee added a commit to nathanaelsee/executorch that referenced this pull request Jun 12, 2024
Summary:
When converting an ExportedProgram to an EdgeProgramManager via to_edge, either _to_dim_order_copy or _to_copy will be used, depending on whether the passed EdgeCompileConfig has _skip_dim_order set to false or true. For example:
```
compile_config=EdgeCompileConfig(
    _check_ir_validity=False,
    _skip_dim_order=True,
),
```
The newer _to_dim_order_copy op was added in D53747744/pytorch#1970.

This diff adds a param to the pass init to determine whether to use the newer op or not.

Differential Revision: D58395616
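Below is a minimal sketch of how the flag affects lowering; the toy module and the explicit `to_edge` call are illustrative assumptions, not code from this PR:
```
import torch
from executorch.exir import EdgeCompileConfig, to_edge

class Cast(torch.nn.Module):
    def forward(self, x):
        return x.to(dtype=torch.float64)

ep = torch.export.export(Cast(), (torch.randn(2, 3),))

# With _skip_dim_order=False the .to() call should lower to _to_dim_order_copy;
# with _skip_dim_order=True the older _to_copy op is kept instead.
edge = to_edge(
    ep,
    compile_config=EdgeCompileConfig(
        _check_ir_validity=False,
        _skip_dim_order=False,
    ),
)
edge.exported_program().graph.print_tabular()
```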
nathanaelsee added follow-up commits with the same summary on Jun 12–13, 2024, and facebook-github-bot pushed the landed change as #3968 on Jun 13, 2024 (Reviewed By: copyrightly; Differential Revision: D58395616; fbshipit-source-id: 8f5289f8ed62a5c3814ec32b91fb055a10944fdb).
Labels: CLA Signed, fb-exported, Merged
3 participants