
Fix pytree flatten/unflatten for KeyedTensor #1899

Closed
wants to merge 1 commit

Conversation

PaulZhang12
Contributor

Differential Revision: D56320582

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Apr 18, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D56320582

Summary:

KeyedTensor's length_per_key represents the sizes of the embeddings after lookup, which are static. Reflect this in the pytree flatten/unflatten so that torch.export understands these constraints instead of allocating new unbacked SymInts.

Differential Revision: D56320582
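To illustrate the idea behind the summary, here is a minimal, torch-free sketch of the pattern: when flattening a keyed container for pytree purposes, the static metadata (keys and per-key lengths) travels in the context rather than as dynamic leaves, so tracing never has to invent new symbolic sizes for it. `KeyedValues`, `flatten`, and `unflatten` below are hypothetical stand-ins, not torchrec's actual `KeyedTensor` API.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical stand-in for a KeyedTensor: a flat list of values
# partitioned by static per-key lengths.
@dataclass
class KeyedValues:
    keys: List[str]
    length_per_key: List[int]  # static: embedding sizes known after lookup
    values: List[float]        # dynamic payload

def flatten(kv: KeyedValues) -> Tuple[List[List[float]], Tuple]:
    # Only the values are dynamic leaves; keys and length_per_key are
    # static, so they go into the context. This mirrors the fix: the
    # sizes are constants of the spec, not freshly allocated symbols.
    return [kv.values], (kv.keys, kv.length_per_key)

def unflatten(leaves: List[List[float]], context: Tuple) -> KeyedValues:
    keys, length_per_key = context
    return KeyedValues(keys=keys, length_per_key=length_per_key, values=leaves[0])

kv = KeyedValues(["a", "b"], [2, 1], [0.1, 0.2, 0.3])
leaves, ctx = flatten(kv)
assert unflatten(leaves, ctx) == kv
```

In the real fix the same split would be expressed through the pytree registration for KeyedTensor, so that export sees length_per_key as concrete integers.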

PaulZhang12 added a commit to PaulZhang12/torchrec that referenced this pull request Apr 19, 2024
Labels: CLA Signed, fb-exported
2 participants