Propagate device type for heterogeneous sharding of tables across different device types #2606
Conversation
This pull request was exported from Phabricator. Differential Revision: D66682124

The commit was updated through successive force-pushes: dcac3ec → 0284270 → 19153a9 → c383a6d → c006ee4 → 8f7b9ae → fdcc59e → f6b54cd
Summary:
For row-wise heterogeneous sharding of tables across CUDA and CPU, we propagate the correct device type within each lookup module based on which shard of each table is being looked up / fetched within that module. We also move some of the wrapper functions that enable us to pass batch info correctly across different modules during model split.
The changes should be backward compatible and should not impact existing behavior.
Reviewed By: jiayisuse
Differential Revision: D66682124
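
To make the idea concrete, here is a minimal sketch of per-shard device-type propagation for a row-wise sharded table. This is not the torchrec implementation from this PR; the names (`ShardMeta`, `RowWiseHeteroLookup`) and the shard layout are hypothetical, and it only illustrates the pattern of routing each lookup to the device type that holds the relevant shard of rows.

```python
from dataclasses import dataclass
from typing import List

import torch


@dataclass
class ShardMeta:
    # Hypothetical metadata: rows [row_start, row_end) of one table,
    # placed on the given device type ("cpu" or "cuda").
    row_start: int
    row_end: int
    device_type: str


class RowWiseHeteroLookup(torch.nn.Module):
    """Looks up each shard of a row-wise sharded table on that shard's own device."""

    def __init__(self, full_weight: torch.Tensor, shards: List[ShardMeta]) -> None:
        super().__init__()
        self.shards = shards
        # Each shard keeps its slice of the table on its own device type,
        # e.g. frequently accessed rows on CUDA and the long tail on CPU.
        self.shard_weights = [
            full_weight[s.row_start : s.row_end].to(s.device_type) for s in shards
        ]

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        dim = self.shard_weights[0].shape[1]
        out = torch.empty(ids.numel(), dim)
        for meta, weight in zip(self.shards, self.shard_weights):
            mask = (ids >= meta.row_start) & (ids < meta.row_end)
            if mask.any():
                # Propagate the shard's device type: indices move to the
                # shard's device for the fetch, rows come back to the output.
                local = (ids[mask] - meta.row_start).to(weight.device)
                out[mask] = weight[local].to(out.device)
        return out


# Usage: rows 0-99 on CPU, rows 100-199 on CUDA (falling back to CPU).
if __name__ == "__main__":
    table = torch.randn(200, 16)
    gpu = "cuda" if torch.cuda.is_available() else "cpu"
    shards = [ShardMeta(0, 100, "cpu"), ShardMeta(100, 200, gpu)]
    lookup = RowWiseHeteroLookup(table, shards)
    print(lookup(torch.tensor([3, 150, 42])).shape)  # torch.Size([3, 16])
```

The point of the sketch is the routing step in `forward`: the device type is decided per shard at lookup time rather than once per table, which is what allows a single logical table to span CUDA and CPU.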