cuda_stub for Windows #6993
Comments
What happens when you try it on Windows? It might just need a BUILD file change. I thought about this when making that change, and my conclusion about Windows was "this will probably work, under the Windows x64 ABI convention". The use of a GNU-syntax assembler script might be problematic; we might need to use nasm instead. |
It just reaches this branch: xla/third_party/tsl/tsl/cuda/stub.bzl, line 24 (at commit 5f3417f).
I don't have access to the GPU machine during the weekend; I will post the error log next week if needed. |
The error log: […] |
Well, note that XLA on GPU on Windows is community-supported. So we'll welcome PRs, but you'll have to drive this! For this specific issue: try adding a condition that handles Windows x86_64 the same way as Linux x86_64? |
@hawkinsp I've been trying to get CUDA support to compile on Windows and, among other issues, I bumped into this one. I tried adding a case for Windows and the stubs are generated, but when I then try to compile them I get a bunch of compilation errors about the assembly code not having the right syntax (even the starting block comments in each file result in errors). I can get past these compilation errors if I remove the […]. FYI, I've started putting up PRs with other changes required for Windows support (#15444, #15448). |
Ok, I finally resolved all other issues and was able to compile the library by excluding the […]. I'm not really familiar at all with these assembly files and their syntax. Do you know what a good next step would be? |
It seems that the trampoline written in assembly in https://github.com/yugr/Implib.so would need to be ported to Windows. In fact, I wonder why xla needs to build the stubs as binary when it already knows all the function names, which can be looked up dynamically at runtime. |
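To make the suggestion above concrete, here is a minimal, hypothetical sketch (not the actual XLA/TSL stub code and not what make_stub.py generates) of a purely source-level stub for a single CUDA Driver API entry point: the real symbol is resolved by name the first time the stub is called, so no assembly trampoline is needed. The library names, the numeric error value, and the `CUresult` stand-in are assumptions.

```cpp
// Hypothetical sketch only -- not the code generated by make_stub.py.
// The stub resolves the real CUDA symbol by name on first use.
#if defined(_WIN32)
#include <windows.h>
static void* LoadCudaLibrary() {
  return reinterpret_cast<void*>(LoadLibraryA("nvcuda.dll"));
}
static void* ResolveSymbol(void* handle, const char* name) {
  return reinterpret_cast<void*>(
      GetProcAddress(static_cast<HMODULE>(handle), name));
}
#else
#include <dlfcn.h>
static void* LoadCudaLibrary() { return dlopen("libcuda.so.1", RTLD_LAZY); }
static void* ResolveSymbol(void* handle, const char* name) {
  return dlsym(handle, name);
}
#endif

using CUresult = int;  // Stand-in for the real typedef from cuda.h.

extern "C" CUresult cuInit(unsigned int flags) {
  using Fn = CUresult (*)(unsigned int);
  // Thread-safe one-time resolution (C++11 static local initialization).
  static Fn real = [] {
    void* lib = LoadCudaLibrary();
    return lib ? reinterpret_cast<Fn>(ResolveSymbol(lib, "cuInit")) : nullptr;
  }();
  if (real == nullptr) {
    return 3;  // CUDA_ERROR_NOT_INITIALIZED; numeric value assumed here.
  }
  return real(flags);
}
```

Whether such per-function C++ stubs would match the generated assembly trampolines in every respect (calling-convention guarantees, versioned symbols, error reporting) is a separate question, but it shows why knowing all the function names up front makes a runtime lookup possible without assembly.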
Imported from GitHub PR openxla/xla#15518

I'm not 100% sure that I'm doing the right thing, but after this change I got rid of some compilation errors on Windows and the linker seems to be happy, so I think it may be ok. Let me describe my reasoning:

1. This [commit](openxla/xla@b021ae8) added support for lazily loading symbols from the CUDA shared libraries.
2. As discussed in [this issue](openxla/xla#6993), the current approach is not supported on Windows due to the use of GNU assembly (it results in compilation errors with MSVC).
3. After reading a little into this, I believe that Windows libraries built with MSVC can already load the linked dynamic libraries lazily, so this trampoline mechanism does not appear to be needed on Windows.
4. For this reason, I made the trampoline bits conditional on not being on Windows, hoping that the symbols will not be resolved directly against the CUDA dependencies that come in via the CUDA header files (see the sketch after this description).

@metab0t is this what you were suggesting at the end of that discussion?

cc @ddunl (this should be the last PR that touches the TSL code for Windows CUDA support)

Copybara import of the project:

-- ccd500cea1fa08f65c680d67b02d444c29422981 by eaplatanios <[email protected]>: Added support for compiling the CUDA stubs on Windows.

Merging this change closes #15518

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#15518 from eaplatanios:u/eaplatanios/cuda-stubs-windows ccd500cea1fa08f65c680d67b02d444c29422981
PiperOrigin-RevId: 659724373
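As a rough illustration of points 3 and 4 above (a hedged sketch, not the actual diff in this PR): on Windows, the lazy-loading role played by the GNU-assembly trampoline on Linux can be filled by MSVC's delay-load machinery, so the trampoline path only needs to be compiled on non-Windows platforms. The linker options and DLL/library names below follow the standard MSVC delay-load convention and are not taken from the PR.

```cpp
// Sketch of the Windows-side idea, not the PR's actual build changes.
// When the import library for nvcuda.dll is linked with delay-loading
// enabled, the DLL is loaded (and the symbol resolved) on the first call,
// which mirrors what the GNU-assembly trampoline does on Linux.
//
// Link options (MSVC): cuda.lib delayimp.lib /DELAYLOAD:nvcuda.dll
#include <cuda.h>  // Declares the CUDA Driver API, e.g. cuInit().

int main() {
  // nvcuda.dll is only mapped into the process when this call executes;
  // if the DLL is missing, the delay-load helper fails here rather than
  // at process start-up.
  CUresult result = cuInit(0);
  return result == CUDA_SUCCESS ? 0 : 1;
}
```

Whether relying on delay-loading is fully equivalent to the explicit stubs (for example, for graceful error handling when no CUDA driver is installed) is exactly the open question raised in the points above.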
What is the final goal of this effort? Note, as for the stubs (including […]
What I'm trying to say is that unless there is an intention to also properly support CUDA at runtime (i.e. run legit CUDA-dependent tests), trying to make it merely compile for Windows is more likely to cause harm than good. At the same time, properly maintaining working CUDA on Windows is a colossal task, which I don't think is possible without a dedicated team (or at least a few people) and an entire build/test infrastructure behind it. If there is no plan to have all that, I truly don't think we should be trying to make this compile under Windows, because compiling it does not bring us much closer to it actually working, but it does introduce Windows-specific logic in the build that is not properly tested yet has to be maintained. |
@vam-google I responded here mentioning that I decided to give up on this. Dynamic linking on Windows is beyond my knowledge, and this turned out to be a lot more work than I expected it to be worth. Instead, I'll try to figure something out using WSL and the Linux CUDA build of XLA. |
This commit b021ae8 introduces `cuda_stub`, but it is not implemented for Windows. Can we run `make_stub.py` under Windows?