CUDA Context #9
Hi, the CUDA context is by default created by the ZED SDK during the camera `open()` call. In the past, some users worked around this in Python with a script (see this issue).
Thank you @adujardin, in fact I already wanted to rewrite this Python script in C++.
Yes, the GPU buffers can be shared within the same CUDA context without copying. However, sharing the same context basically means there is no overlap possible, and the computation of the ZED SDK and the inference won't run in parallel anymore. It might still be faster in the end, but not necessarily. NB: to get parallelism while sharing a context you need to create non-blocking streams, but you can't control that in the ZED SDK (it uses the default stream), and I doubt you can in TensorFlow either.
But if ZED and TF can each parallelize their tasks so that all GPU cores are already in use, is there then no benefit from overlap?
Hi, could you please share some details on how you manage the context under the hood? In the Python code we can only see the memory limit for TensorFlow.
By the way, is a GPU mandatory for depth processing?
Regards,