introduce _to_dim_order_copy op to runtime #1970
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/1970. Note: links to docs will display an error until the docs builds have completed.
✅ No failures as of commit 7090428 with merge base 53078c4. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D53747744
Summary: Pull Request resolved: pytorch#1970

This diff creates a new and special operator, `_to_dim_order_copy`. The new operator introduces two critical capabilities to the runtime:

1. It extracts memory_format information from a tensor based on its dim_order instead of its strides.
2. It supports both channels_last and contiguous memory formats in the runtime.

Please note that memory format is a concept parallel to memory layout, and supporting a new format does not violate our contract of only supporting tensors with a contiguous memory layout. Details can be found [here](https://discuss.pytorch.org/t/contigious-vs-non-contigious-tensor/30107) and [here](https://pytorch.org/blog/tensor-memory-format-matters/).

Furthermore, `_to_dim_order_copy` is a special operator: it has no native ATen variant but is needed by most models in the edge dialect, so it cannot go directly into `kernels/portable/custom_ops.yaml` (which requires manual registration every time and does not work for an operator needed by many models) or `kernels/portable/functions.yaml` (whose entries should have a native ATen variant). To overcome that, this diff puts the ATen-mode kernel for `_to_dim_order_copy` under `kernels/aten`, and the lean-mode kernel under `kernels/portable/functions.yaml`. It also updates dependencies and utils.

Reviewed By: larryliu0820

Differential Revision: D53747744
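As a rough illustration of the dim-order idea (an eager-PyTorch sketch, not code from this diff; it assumes a recent PyTorch where `Tensor.dim_order()` is available), dim_order records the permutation of dimensions from outermost to innermost in memory, which is enough to distinguish contiguous from channels_last without consulting strides:

```python
import torch

x = torch.randn(2, 3, 4, 5)                          # NCHW, contiguous
y = x.to(memory_format=torch.channels_last)          # NHWC in memory

# dim_order lists dimensions from outermost to innermost in memory.
print(x.dim_order())   # (0, 1, 2, 3)  -> contiguous
print(y.dim_order())   # (0, 2, 3, 1)  -> channels_last

# Strides carry the same ordering information, but entangled with sizes.
print(x.stride())      # (60, 20, 5, 1)
print(y.stride())      # (60, 1, 15, 3)


# Conceptual sketch (not the runtime's actual implementation) of how a kernel
# could recover the memory format from a dim_order tuple alone.
def memory_format_from_dim_order(dim_order):
    if dim_order == tuple(range(len(dim_order))):
        return torch.contiguous_format
    if dim_order == (0, 2, 3, 1):
        return torch.channels_last
    raise ValueError(f"unsupported dim_order: {dim_order}")


assert memory_format_from_dim_order(x.dim_order()) == torch.contiguous_format
assert memory_format_from_dim_order(y.dim_order()) == torch.channels_last
```

The runtime-side kernel performs the analogous check in C++; the Python sketch above only mirrors the reasoning.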
This pull request has been merged in cf40792.
Summary: Pull Request resolved: pytorch#3968

When converting an ExportedProgram to an EdgeProgramManager via to_edge, either _to_dim_order_copy or _to_copy will be used, depending on whether the passed EdgeCompileConfig has _skip_dim_order set to true or false. For example:

```
compile_config=EdgeCompileConfig(
    _check_ir_validity=False,
    _skip_dim_order=True,
),
```

The newer _to_dim_order_copy op was added in D53747744 / pytorch#1970. This adds a param to the pass init to determine whether we use the newer op or not.

Reviewed By: copyrightly

Differential Revision: D58395616
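For context, here is a minimal end-to-end sketch of how that config is typically passed to to_edge (assumed usage of the public executorch.exir API, not code from this PR; the model and variable names are made up):

```python
import torch
from executorch.exir import EdgeCompileConfig, to_edge


class TinyModel(torch.nn.Module):
    def forward(self, x):
        # The dtype cast lowers to a *_copy op in the edge dialect.
        return (x + 1.0).to(dtype=torch.float64)


example_inputs = (torch.randn(2, 3, 4, 5),)
exported = torch.export.export(TinyModel(), example_inputs)

# _skip_dim_order=True keeps the legacy aten._to_copy op in the graph;
# with _skip_dim_order=False the dim-order path emits _to_dim_order_copy instead.
edge = to_edge(
    exported,
    compile_config=EdgeCompileConfig(
        _check_ir_validity=False,
        _skip_dim_order=True,
    ),
)
print(edge.exported_program().graph_module.graph)
```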