
Limited GPU Memory Utilization in ColBERT Deployment #293

Open

nauyan opened this issue Jan 17, 2025 · 0 comments

Background:

I previously deployed ColBERT in Python using the fastembed library with GPU support.
During this deployment, I observed that it utilized only 2 GB of GPU memory out of the 16 GB available on my GPU.
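
For context, that deployment was roughly along the following lines (a minimal sketch, not my exact code; the constructor arguments, in particular the `providers` setting, are assumptions):

```python
# Minimal sketch of the fastembed ColBERT deployment (approximate).
from fastembed import LateInteractionTextEmbedding

# GPU execution assumes onnxruntime-gpu / the fastembed-gpu package is installed.
model = LateInteractionTextEmbedding(
    "colbert-ir/colbertv2.0",
    providers=["CUDAExecutionProvider"],
)

documents = [
    "ColBERT produces one embedding vector per token.",
    "Triton Inference Server can serve ONNX models.",
]

# embed() yields one (num_tokens, dim) array per document.
embeddings = list(model.embed(documents))
print(embeddings[0].shape)
```

With this setup, GPU memory usage stayed at about 2 GB of the 16 GB available.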
Current Deployment:

To address this limited memory usage, I redeployed ColBERT on Triton Inference Server using the ONNX backend, expecting better GPU memory utilization.
However, I still observe that the deployment only utilizes approximately 2 GB of GPU memory, leaving most of the GPU memory unused.
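
For reference, I query the Triton deployment roughly as follows (a minimal sketch using the tritonclient HTTP API; the tokenizer choice and the document batch are placeholders, not my exact client code):

```python
# Minimal sketch of a client request against the Triton deployment (approximate).
import numpy as np
import tritonclient.http as httpclient
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("colbert-ir/colbertv2.0")
client = httpclient.InferenceServerClient(url="localhost:8000")

documents = ["ColBERT produces one embedding vector per token."] * 8
enc = tokenizer(documents, padding=True, return_tensors="np")

input_ids = enc["input_ids"].astype(np.int64)
attention_mask = enc["attention_mask"].astype(np.int64)

inputs = [
    httpclient.InferInput("input_ids", list(input_ids.shape), "INT64"),
    httpclient.InferInput("attention_mask", list(attention_mask.shape), "INT64"),
]
inputs[0].set_data_from_numpy(input_ids)
inputs[1].set_data_from_numpy(attention_mask)

outputs = [httpclient.InferRequestedOutput("contextual")]

result = client.infer(
    model_name="colbert-ir_colbertv2.0",
    inputs=inputs,
    outputs=outputs,
)
print(result.as_numpy("contextual").shape)  # (batch, num_tokens, dim)
```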

Issue:

It appears that neither the fastembed deployment nor the Triton deployment fully utilizes the available GPU memory.
I suspect there are specific settings, configurations, or optimizations that could allow ColBERT to use more of the GPU memory.

Questions:

  1. Are there specific settings in Triton Inference Server, ONNX backend, or ColBERT configurations to increase GPU memory usage?
  2. Could this behavior be related to batch size, ONNX graph optimization, or other resource allocation parameters?
  3. Is this limited memory usage expected for ColBERT models, or could it indicate a bottleneck in the deployment?

I am using the model.onnx file available on Hugging Face for ColBERT.
Here is my config.pbtxt file:
```
name: "colbert-ir_colbertv2.0"
platform: "onnxruntime_onnx"
backend: "onnxruntime"
max_batch_size: 25

input [
  {
    name: "attention_mask"
    data_type: TYPE_INT64
    dims: [-1]
  },
  {
    name: "input_ids"
    data_type: TYPE_INT64
    dims: [-1]
  }
]

output [
  {
    name: "contextual"
    data_type: TYPE_FP32
    dims: [-1, -1]
  }
]

optimization {
  priority: PRIORITY_DEFAULT
  input_pinned_memory {
    enable: true
  }
  output_pinned_memory {
    enable: true
  }
}

dynamic_batching {
  preferred_batch_size: [4]
  max_queue_delay_microseconds: 0
}

instance_group [
  {
    name: "colbert-ir_colbertv2.0"
    kind: KIND_GPU
    count: 1
    gpus: [0]
  }
]

default_model_filename: "model.onnx"
```
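
For comparison, one variant I am considering, assuming the standard Triton model-configuration fields behave as documented (larger preferred batch sizes, a small queue delay so batches actually fill, and more than one model instance per GPU), would change the relevant sections roughly as follows. Whether this would raise memory utilization rather than just throughput is part of what I am asking:

```
# Hypothetical variant of the sections above (untested, for illustration only).
max_batch_size: 25

dynamic_batching {
  preferred_batch_size: [16, 25]
  max_queue_delay_microseconds: 100
}

instance_group [
  {
    kind: KIND_GPU
    count: 2        # two copies of the model on GPU 0
    gpus: [0]
  }
]
```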
