Is your feature request related to a problem? Please describe.
I'm not sure whether this is specific to the ONNX backend.
When model_warmup { ... } entries are defined in config.pbtxt and the system has two GPUs,
Triton runs ModelInitialize for each GPU, and the warmup runs serially: all warmup requests complete on the first GPU before any start on the second, and so on. A minimal example of the kind of warmup entry I mean is sketched below.
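For reference, a hedged sketch of such a configuration: the input name, dtype, and dims below are placeholders for illustration, not taken from my actual model.

```
# Hypothetical config.pbtxt fragment; INPUT0 and its shape/dtype
# are placeholders for the real model inputs.
instance_group [
  {
    kind: KIND_GPU
    count: 1
    gpus: [ 0, 1 ]   # one instance per GPU
  }
]
model_warmup [
  {
    name: "random_warmup"
    batch_size: 1
    inputs {
      key: "INPUT0"
      value: {
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
        random_data: true
      }
    }
  }
]
```

With this setup, the warmup for the instance on GPU 0 finishes before the warmup for the instance on GPU 1 begins.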
Describe the solution you'd like
I'd like the warmup requests to run on all GPUs in parallel to speed up model startup; otherwise startup time is quite slow.
Describe alternatives you've considered
I could warm up the model manually, but I can't see a way to direct a request to a specific GPU.
Additional context