
Propagate device type for heterogeneous sharding of table across different device types #2606

Closed

faran928 wants to merge 1 commit

Conversation

@faran928 (Contributor) commented on Dec 4, 2024

Summary:
For row-wise heterogeneous sharding of tables across cuda and cpu, we propagate the correct device type within each lookup module based on which shard of each table is being looked up / fetched within that module.

The changes should be backward compatible and should not impact existing behavior.

Differential Revision: D66682124
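
To make the mechanism concrete, here is a minimal sketch of per-shard device-type propagation. All names in it (`SHARD_PLACEMENTS`, `device_type_for_shard`, `ShardLookup`) are hypothetical, not torchrec's actual API; the point is only that each lookup module is constructed with the device type of the shard it owns, rather than one device type for the whole table.

```python
import torch

# Hypothetical per-shard placement for one row-wise sharded table: the first
# two shards live on GPU, the last on CPU. Falls back to CPU-only when no GPU
# is present so the sketch stays runnable.
GPU = "cuda" if torch.cuda.is_available() else "cpu"
SHARD_PLACEMENTS = {"table_0": [GPU, GPU, "cpu"]}


def device_type_for_shard(table_name: str, shard_idx: int) -> str:
    """Look up the device type of one shard instead of assuming a single
    device type for the whole table."""
    return SHARD_PLACEMENTS[table_name][shard_idx]


class ShardLookup(torch.nn.Module):
    """Toy lookup module owning one shard of an embedding table."""

    def __init__(self, rows: int, dim: int, device_type: str) -> None:
        super().__init__()
        # The propagated device type decides where this shard's weights live.
        self.device_type = device_type
        self.emb = torch.nn.EmbeddingBag(rows, dim, device=device_type)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # Route the ids to this shard's own device before the lookup.
        return self.emb(ids.to(self.device_type).unsqueeze(0))


# One lookup module per shard, each constructed with its own device type.
lookups = [
    ShardLookup(100, 8, device_type_for_shard("table_0", i)) for i in range(3)
]
pooled = [lookup(torch.tensor([1, 2, 3])) for lookup in lookups]
```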

@facebook-github-bot added the CLA Signed label on Dec 4, 2024
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D66682124


faran928 added a commit to faran928/torchrec that referenced this pull request on Dec 5, 2024

Propagate device type for heterogeneous sharding of table across different device types (pytorch#2606)

Summary:

For row-wise heterogeneous sharding of tables across cuda and cpu, we propagate the correct device type within each lookup module based on which shard of each table is being looked up / fetched within that module.

We also move some of the wrapper functions that enable us to pass batch info correctly across different modules during the model split.

The changes should be backward compatible and should not impact existing behavior.

Differential Revision: D66682124
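
The wrapper functions mentioned above are about threading batch info through every sub-module once the model is split. As a loose sketch only (`BatchInfo`, `SparsePart`, and `DensePart` are invented names, not torchrec's actual wrappers), the idea is a small metadata object that each module receives alongside its tensors:

```python
from dataclasses import dataclass

import torch


@dataclass
class BatchInfo:
    """Hypothetical carrier for batch metadata that must survive a model
    split: every sub-module sees the batch size the input was built with."""

    batch_size: int


class SparsePart(torch.nn.Module):
    def forward(self, ids: torch.Tensor, info: BatchInfo) -> torch.Tensor:
        # Recover (batch_size, num_ids_per_example) from the flat id tensor
        # using the propagated info rather than guessing from tensor shapes.
        return ids.view(info.batch_size, -1).float().mean(dim=1, keepdim=True)


class DensePart(torch.nn.Module):
    def forward(self, pooled: torch.Tensor, info: BatchInfo) -> torch.Tensor:
        # The same BatchInfo flows on, so both halves of the split agree.
        assert pooled.shape[0] == info.batch_size
        return pooled * 2.0


info = BatchInfo(batch_size=4)
ids = torch.arange(8)  # flat input, 2 ids per example
out = DensePart()(SparsePart()(ids, info), info)
```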

faran928 added further commits to faran928/torchrec that referenced this pull request on Dec 18 and Dec 20, 2024, each exported from Phabricator with the same summary as above; the final one adds "Reviewed By: jiayisuse".


Labels: CLA Signed, fb-exported
2 participants