Hi!
I am looking through the CUDA code to understand a few things, and in the normalization code I found several instances of the following:
https://github.com/NiftyPET/NIPET/blob/master/niftypet/nipet/src/norm.cu#L94-L98
It looks like you are allocating managed memory whenever the OS is not 32-bit Windows. I have some questions about why this is done:

1. Managed memory behaves differently on pre- and post-6.x compute capability devices, so you may get different performance "in the general case".
2. This code does not seem to need managed memory (if I am wrong, please correct me!). You allocate an array on the GPU, use it in the kernel, then free it, so managed memory will likely just make execution slower (see the sketch below). Why are you using it? Is it perhaps because you are worried about running out of RAM, and managed memory may help on cc >= 6.0 architectures?
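For reference, here is a minimal sketch of the pattern I mean. It is not the actual norm.cu code; the kernel name, sizes, and launch configuration are made up for illustration. It just contrasts the managed allocation I see in the linked lines with a plain device allocation for an allocate / launch / free sequence:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Trivial kernel standing in for the normalization kernel (illustration only).
__global__ void scale(float *buf, int n, float factor) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) buf[i] *= factor;
}

int main() {
  const int n = 1 << 20;

  // Pattern as I read it in the linked lines: managed allocation, kernel
  // launch, free. On pre-6.x devices the whole managed buffer is migrated
  // to the device at kernel launch; on 6.x+ pages migrate on demand via
  // page faults, which is where the "in the general case" performance
  // difference comes from.
  float *managed;
  cudaMallocManaged(&managed, n * sizeof(float));
  scale<<<(n + 255) / 256, 256>>>(managed, n, 2.0f);
  cudaDeviceSynchronize();
  cudaFree(managed);

  // Plain device allocation, which seems sufficient if the buffer is only
  // ever touched from the kernel (no host access between alloc and free).
  float *device;
  cudaMalloc(&device, n * sizeof(float));
  scale<<<(n + 255) / 256, 256>>>(device, n, 2.0f);
  cudaDeviceSynchronize();
  cudaFree(device);

  printf("done\n");
  return 0;
}
```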