Custom ROCm HIP and C++ extensions #2342
@jeffdaily We have some internal documentation that highlights some of the differences in enabling PyTorch extensions for ROCm. Shall I put that together into something we can publish in the PyTorch documentation?
/assigntome
This issue has been unassigned due to inactivity. If you are still planning to work on this, you can still send a PR referencing this issue.
svekars added the docathon-h2-2023 label and removed the docathon-h1-2023 label on Oct 30, 2023
/assigntome
🚀 The feature, motivation and pitch
Dear PyTorch developers and community,
We have a nice cpp_extension tutorial on custom CUDA extensions, written by Peter Goldsborough. I'm wondering whether the same can be done on AMD GPUs with kernels written in ROCm HIP. Specifically: call a custom forward+backward HIP kernel from PyTorch and include it in a deep learning pipeline. Is this currently supported, and are there any limitations?
Does anybody have experience writing custom HIP/C++ kernels and using them in PyTorch?
cc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport
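As a rough illustration of what the question is asking about, here is a minimal sketch (not from the original issue) of how a custom kernel could be built with `torch.utils.cpp_extension` on a ROCm build of PyTorch. It assumes that the extension machinery hipifies CUDA-style sources when PyTorch is installed for ROCm, so the workflow from the cpp_extension tutorial carries over; all function and variable names below are illustrative only.

```python
# Hedged sketch: build and call a custom GPU kernel via load_inline.
# On a ROCm build of PyTorch, the CUDA-style source below is expected to be
# hipified and compiled for AMD GPUs; names here are illustrative assumptions.
import torch
from torch.utils.cpp_extension import load_inline

cuda_source = r"""
#include <torch/extension.h>

// Simple elementwise kernel: out[i] = alpha * in[i].
__global__ void scale_kernel(const float* in, float* out, float alpha, int64_t n) {
    int64_t i = blockIdx.x * (int64_t)blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = alpha * in[i];
    }
}

torch::Tensor scale(torch::Tensor input, double alpha) {
    TORCH_CHECK(input.is_cuda(), "input must live on the GPU");
    auto in = input.contiguous();
    auto out = torch::empty_like(in);
    int64_t n = in.numel();
    int threads = 256;
    int blocks = (int)((n + threads - 1) / threads);
    scale_kernel<<<blocks, threads>>>(
        in.data_ptr<float>(), out.data_ptr<float>(),
        static_cast<float>(alpha), n);
    return out;
}
"""

# Declaration only; load_inline generates the Python bindings for "scale".
cpp_source = "torch::Tensor scale(torch::Tensor input, double alpha);"

ext = load_inline(
    name="scale_ext",
    cpp_sources=cpp_source,
    cuda_sources=cuda_source,
    functions=["scale"],
    verbose=True,
)

# On ROCm builds, device "cuda" addresses the AMD GPU.
x = torch.randn(1024, device="cuda")
y = ext.scale(x, 2.0)
print(torch.allclose(y, 2.0 * x))
```

For a reusable operator with a custom backward pass, the same compiled function could be wrapped in a `torch.autograd.Function`, as the cpp_extension tutorial does for CUDA; this sketch only covers the forward call.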