Do you know how to use TiedLayerSpec? I want to fine-tune Whisper large-v2 on multiple GPUs (single node). The token embedding layer is used before the transformer decoder and then reused again after the decoder layers, as the output projection. According to the documentation, the embedding layer should be wrapped in a TiedLayerSpec, but I don't understand the working principle of TiedLayerSpec. After wrapping the embedding layer in a TiedLayerSpec, how does DeepSpeed reuse that layer at the end of the transformer decoder, and how should I implement it so that DeepSpeed does so? There is very little documentation and explanation of TiedLayerSpec, so I hope someone can help me. Thank you!
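For context, here is roughly how I understand the wiring is supposed to look. This is just a minimal, untested sketch: the sizes, the `lm_head_forward` function, and the `MyDecoderLayerPipe` placeholder are my own assumptions, not actual Whisper or DeepSpeed code.

```python
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule, LayerSpec, TiedLayerSpec


def lm_head_forward(embed, hidden_states):
    # forward_fn receives the shared module as its first argument.
    # Here the *same* embedding weight is reused as the output projection:
    # hidden states -> vocabulary logits.
    return hidden_states @ embed.weight.t()


# Hypothetical sizes, roughly matching whisper-large-v2's decoder.
vocab_size, d_model, num_decoder_layers = 51865, 1280, 32

layers = [
    # Input side of the tie: the key 'embed' identifies the shared module,
    # and tied_weight_attr names the parameter that must stay in sync.
    TiedLayerSpec('embed', nn.Embedding, vocab_size, d_model,
                  tied_weight_attr='weight'),

    # ... encoder/decoder blocks wrapped as pipeline layers would go here, e.g.
    # *[LayerSpec(MyDecoderLayerPipe, d_model) for _ in range(num_decoder_layers)],
    # where MyDecoderLayerPipe is a placeholder for your own wrapper module.

    # Output side of the tie: same key, so no new parameters are created on a
    # stage that already owns 'embed'; forward_fn swaps in the LM-head math.
    TiedLayerSpec('embed', nn.Embedding, vocab_size, d_model,
                  forward_fn=lm_head_forward,
                  tied_weight_attr='weight'),
]

# deepspeed.init_distributed() must already have been called at this point.
model = PipelineModule(layers=layers, num_stages=2,
                       loss_fn=nn.CrossEntropyLoss())
```

My current understanding is that every spec sharing the same key refers to one module instance within a stage, and that when the tied specs land on different pipeline stages, DeepSpeed keeps a copy on each of those stages and all-reduces the tied weight's gradients across them so the copies stay identical — but I'm not sure this is right, which is why I'm asking.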