
Parallel warmup when using multiple GPUs #292

Open
asaff1 opened this issue Jan 12, 2025 · 0 comments
asaff1 commented Jan 12, 2025

Is your feature request related to a problem? Please describe.
I'm not sure whether this is specific to the ONNX backend.
When model_warmup { .... } entries are defined in config.pbtxt and the system has two GPUs,
Triton runs ModelInitialize for each GPU, but the warmup runs serially: it first runs all warmup requests on the first GPU, and only after they complete does it run them on the second GPU, and so on.
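For reference, a minimal model_warmup entry looks roughly like this (field names follow Triton's model_config.proto; the input name, dtype, and dims below are placeholders, not taken from my actual model):

```proto
model_warmup [
  {
    name: "warmup_sample"   # label shown in the server log
    batch_size: 1
    inputs: {
      key: "input__0"       # placeholder input tensor name
      value: {
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
        zero_data: true     # send all-zero data for warmup
      }
    }
  }
]
```

With two GPUs in the instance_group, each instance replays these requests during ModelInitialize, which is where the serial behavior shows up.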

Describe the solution you'd like
I'd like the warmup requests to run on all GPUs in parallel to speed up model startup; otherwise startup is quite slow.

Describe alternatives you've considered
I could warm up the model manually by sending requests after startup, but I cannot see how to direct a request to a specific GPU.
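As a rough client-side sketch of that alternative: firing warmup requests concurrently should at least keep multiple instances busy, since Triton's scheduler distributes concurrent requests across the instances in an instance_group, even though nothing below pins a request to a particular GPU. The send_warmup helper here is hypothetical and stands in for a real inference call (e.g. via tritonclient):

```python
from concurrent.futures import ThreadPoolExecutor

def send_warmup(gpu_index: int) -> int:
    # Hypothetical stand-in for a real inference request. In practice this
    # would call something like tritonclient's infer(); which instance (GPU)
    # actually serves the request is decided by Triton's scheduler.
    return gpu_index

def warm_up_all(num_gpus: int, requests_per_gpu: int = 4) -> list:
    # Submit warmup requests for all GPUs concurrently instead of serially,
    # so enough in-flight requests exist to occupy every model instance.
    with ThreadPoolExecutor(max_workers=num_gpus * requests_per_gpu) as pool:
        futures = [
            pool.submit(send_warmup, gpu)
            for gpu in range(num_gpus)
            for _ in range(requests_per_gpu)
        ]
        # Results come back in submission order.
        return [f.result() for f in futures]

results = warm_up_all(num_gpus=2)
```

This only approximates per-GPU warmup; a built-in parallel warmup would still be preferable because the server controls instance placement.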

Additional context
