A container similar to thrust::device_vector would shrink our code base again and should make algorithms on its memory safer. For HASEonGPU we only need a container equivalent to std::array, so no dynamic resizing is necessary.
On the algorithm side, we need a reduce and an exclusive scan/prefix sum. Building these on alpaka buffers, with a wrapper container on top, would be perfect 😸
I would suggest creating a separate repository for these containers and algorithms. Name suggestions? 🐯
Is it possible to use thrust directly as the implementation for the CUDA backend? Or would that possibly interfere with the use of alpaka buffers?
As for naming suggestions: since it emulates the "look and feel" of the STL, what about "Alpaka Standard Template Library" (ASTL), with a namespace alp::?