Question: GPU support. #192
Comments
Hi,
Thanks a lot — that gave me a lot of headaches. PyCharm did work, but ONNX did not, neither here for this plugin nor for onnxruntime. I just updated cuDNN and it worked for Deepness. Then I removed the CPU-only onnxruntime so the GPU build was used. That was easy. ;) What makes me wonder is the timing when I run the model with Deepness versus plain onnxruntime: the plugin is much slower on CPU than what I tried in code.
A symbol/message before running would be nice, indicating whether a GPU etc. was detected.
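A detection message like that could be a small check against onnxruntime's provider list. This is only a sketch of the idea, not the plugin's actual code; the helper name is made up, and the import is guarded so the snippet also runs where onnxruntime is not installed:

```python
def gpu_provider_available():
    """Return True/False for CUDA support, or None if onnxruntime is missing.

    Hypothetical helper: something like this could drive a GPU/CPU indicator
    shown before inference starts.
    """
    try:
        import onnxruntime as ort  # the runtime Deepness uses for inference
    except ImportError:
        return None
    # onnxruntime-gpu builds list CUDAExecutionProvider; CPU-only builds do not.
    return "CUDAExecutionProvider" in ort.get_available_providers()

print(gpu_provider_available())
```

`get_available_providers()` reports what the installed onnxruntime build supports, which is exactly the difference between the CPU-only and GPU wheels discussed above.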
Hi,
Since the plugin has been using the GPU lately, I couldn't test why the CPU was so slow. I have read about speedups of roughly 40× from using the GPU, which would be the right order of magnitude, or even better. What I also noticed: when I select a raster with 10 cm resolution, loading the default parameters does not select 10 cm as the resolution for the net. For 20 cm it did (often?).
While we were discussing the CPU speed too: yesterday I tried the same model in Deepness on a laptop with CPU only, a Tuxedo Pulse 1. Its mobile Ryzen chip is a generation older than the Ryzen 5800X I tried before, yet its inference is much faster. The notebook uses all threads, while on the Windows machine with the faster chip the CPU was never maxed out — hence it was an order of magnitude slower. I hope this new data point helps.
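The "never maxed out" observation suggests the ONNX Runtime session may be using fewer intra-op threads than the machine has. A minimal sketch of how one could pin the thread count, assuming a plain onnxruntime session (the `model.onnx` path is a placeholder, and the onnxruntime lines are shown commented so the snippet runs without the package):

```python
import os

# Hardware threads this machine exposes; onnxruntime's default may use fewer,
# which would match the idle cores observed on the Windows box.
threads = os.cpu_count() or 1

# Assumed usage with onnxruntime installed:
# import onnxruntime
# opts = onnxruntime.SessionOptions()
# opts.intra_op_num_threads = threads  # parallelism inside a single operator
# sess = onnxruntime.InferenceSession("model.onnx", sess_options=opts)
print(threads)
```

Whether Deepness exposes such a setting is a separate question; this only illustrates the knob onnxruntime itself provides.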
Could you be more specific on which instructions to follow?
Hi,
From what I remember, and from what I see in the code, on Windows we did not manage to run it with GPU — it is more complicated there — so Deepness installs only the CPU build of onnxruntime. I agree it would be worth adding documentation on how to run it with GPU, along with the feature previously mentioned in this thread. Are you running on Linux or Windows?
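On Linux, the manual swap described earlier in the thread (drop the CPU wheel so the GPU one is picked up) would look roughly like this. This is a sketch, not an official procedure: the `python3` here must be the interpreter QGIS actually embeds, and `onnxruntime-gpu` additionally needs a matching CUDA/cuDNN install:

```shell
# Replace the CPU-only wheel with the GPU build in the Python env QGIS uses.
python3 -m pip uninstall -y onnxruntime
python3 -m pip install onnxruntime-gpu   # requires compatible CUDA + cuDNN
# Verify: CUDAExecutionProvider should now appear in the provider list.
python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```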
I am running on Linux (Ubuntu 22.04.5), 13th Gen Intel Core i9-13900 (× 32 threads), 64 GB RAM and an NVIDIA RTX A4000.
I tried the solar panel segmentation model from the model zoo.
But the execution was rather slow: it took 80+ minutes for a digital orthophoto tile (20 cm resolution, 5000 pixels in each direction).
Inference was done on the CPU. Question: can I do the inference with Deepness on the GPU, too?
Or is that feasible?
CPU: Ryzen 7 5800X
GPU: NVIDIA 1660 Ti
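For a rough sense of the workload behind those 80+ minutes, the raster size from the question can be turned into a tile count. The 512 px tile edge is an assumption for illustration — Deepness's actual tiling is a model/plugin parameter:

```python
import math

width = height = 5000  # orthophoto pixels per side (from the question)
tile = 512             # assumed inference tile edge, not the plugin's setting

# Number of tiles the model would have to process, ignoring overlap.
tiles = math.ceil(width / tile) * math.ceil(height / tile)
print(tiles)  # 100 tiles -> ~48 s per tile at the reported 80 minutes
```

At ~48 s per tile on CPU, even a modest GPU speedup would bring the whole raster down to a few minutes, which is why the GPU question matters here.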