Questions about TiedLayerSpec #31

Open
josephwong14wkh opened this issue Apr 29, 2024 · 0 comments

Comments

@josephwong14wkh

Do you know how to use TiedLayerSpec? I want to finetune Whisper large-v2 on multiple GPUs (single node). The embedding layer is used before the transformer decoder and again after the last decoder layer. According to the documentation, the embedding layer should be wrapped in a TiedLayerSpec, but I don't understand how TiedLayerSpec works. After wrapping the embedding layer in a TiedLayerSpec, how does DeepSpeed reuse that layer at the end of the transformer decoder, and how should I implement this so that DeepSpeed does it? There is very little documentation and explanation of TiedLayerSpec; I hope someone can help. Thank you!
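From the DeepSpeed pipeline examples, my current understanding of the pattern is something like the sketch below: two specs carry the same key, so they resolve to one shared module and weight, and the second occurrence passes `forward_fn` so the same embedding weight is reused to compute the output logits. `EmbedPipe` and `DecoderBlockPipe` are placeholder names I made up (not the real Whisper wrappers), and the sizes are only roughly those of Whisper large-v2. Is this the intended usage?

```python
# Minimal sketch of the TiedLayerSpec pattern (not the actual Whisper code).
# Two specs share the key 'embed', so they share one module and one weight;
# the second occurrence overrides the computation via forward_fn so the tied
# embedding weight is reused as the output projection.
import torch.nn as nn
from deepspeed.pipe import PipelineModule, LayerSpec, TiedLayerSpec


class EmbedPipe(nn.Embedding):
    """Token embedding; the default tied_weight_attr='weight' points at its weight."""


class DecoderBlockPipe(nn.Module):
    """Stand-in for a Whisper decoder block (real code would also route the
    encoder states / attention mask through the pipeline)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.ff = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):
        return x + self.ff(x)


def logits_from_tied_embedding(embed_module, hidden_states):
    # forward_fn receives the shared (tied) module plus the stage input; here
    # the embedding weight is reused to project hidden states to vocab logits.
    return hidden_states @ embed_module.weight.t()


def build_specs(vocab_size, hidden_dim, num_layers):
    specs = [
        # First occurrence of the tie group 'embed': normal embedding lookup.
        TiedLayerSpec('embed', EmbedPipe, vocab_size, hidden_dim),
    ]
    specs += [LayerSpec(DecoderBlockPipe, hidden_dim) for _ in range(num_layers)]
    # Second occurrence with the same key: the same weight is reused, and
    # forward_fn changes what this pipeline stage computes with it.
    specs.append(TiedLayerSpec('embed', EmbedPipe, vocab_size, hidden_dim,
                               forward_fn=logits_from_tied_embedding))
    return specs


# PipelineModule must be built after deepspeed.init_distributed(), e.g. when
# the script is launched with the deepspeed launcher. Sizes are roughly
# Whisper large-v2 (vocab 51865, d_model 1280, 32 decoder layers).
model = PipelineModule(layers=build_specs(vocab_size=51865, hidden_dim=1280,
                                          num_layers=32),
                       num_stages=2)
```

If I read the docs right, the stage that owns the last spec computes logits with the tied weight, and DeepSpeed keeps the copies identical by synchronizing the tied weight and its gradients across the stages in the tie group. Please correct me if this is wrong.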
