The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense.
Important Note: NVIDIA AI Enterprise customers can get support from NVIDIA Enterprise support. Please open a case here.
Briefly explain the issue in terms of expected behavior and current behavior.
The init container of the gpu-driver pod (k8s-driver-manager) is configured to run the uninstall_driver function of the driver-manager script. The problem is that, when using the compiled-driver route, it does not check whether a driver is already installed. As a result, any node reboot triggers an unnecessary driver recompile and reinstall even when no new driver version is available.
The nodes already expose the installed driver version through labels, so the script should not try to uninstall the driver when a driver of the same version is already installed.
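A rough sketch of the kind of guard this issue is asking for. Everything here is illustrative, not the driver-manager's actual code: in the real init container the two versions would come from the node's labels (e.g. via kubectl) and from the driver image, and the label/variable names are assumptions.

```shell
#!/bin/sh
# Illustrative only: variable names and values are stand-ins for what the
# init container would read from the node label and the driver image.
NODE_DRIVER_VERSION="535.104.05"    # version reported by the node's label
TARGET_DRIVER_VERSION="535.104.05"  # version this driver container ships

if [ -n "$NODE_DRIVER_VERSION" ] && [ "$NODE_DRIVER_VERSION" = "$TARGET_DRIVER_VERSION" ]; then
  # Same version already on the node: skip the uninstall/recompile cycle.
  echo "driver $NODE_DRIVER_VERSION already installed; skipping uninstall"
  SKIP_UNINSTALL=true
else
  # Fall through to the existing uninstall_driver path.
  SKIP_UNINSTALL=false
fi
```

With matching versions this prints the "skipping uninstall" message; only a version mismatch (or a missing label) would fall through to the existing uninstall path.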
@slik13 this is a current limitation, and we have a feature on the roadmap to avoid it. Today we use a bind mount to expose the necessary installation files (/usr/bin, /lib/modules, /lib) from the container under /run/nvidia/driver on the host, so the mount is removed on every driver container restart. We want to persist these files to a persistent driver root and configure the nvidia-container-toolkit to look up that path instead.
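The mount lifetime described above can be simulated with plain directories. This is purely a stand-in demo (a tmpdir plays the host, copies play the mounts; no real paths or driver files are touched) to show why the runtime root disappears on restart while a persistent root would survive:

```shell
#!/bin/sh
# Stand-in demo: a tmpdir plays the host filesystem, plain copies play
# the bind mounts. All paths are illustrative.
host=$(mktemp -d)
mkdir -p "$host/container-files" "$host/run-nvidia-driver" "$host/persistent-driver-root"
echo "fake driver file" > "$host/container-files/nvidia-smi"

# Today: installation files are only visible under the runtime root while
# the driver container runs; a restart tears the mount down.
cp -a "$host/container-files/." "$host/run-nvidia-driver/"
rm -rf "$host/run-nvidia-driver"     # simulate the driver container restart

# Planned: persist the files to a driver root that outlives restarts, and
# point the container toolkit at that path instead.
cp -a "$host/container-files/." "$host/persistent-driver-root/"
```

After the simulated restart the runtime root is gone but the persistent root still holds the files, which is the property the roadmap feature relies on.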
Thanks for the quick response; I think I understand the reason for the current behavior now. Is there something I could track (a feature number, issue, etc.) to keep an eye on the addition of this feature?
See uninstall code here: https://github.com/NVIDIA/k8s-driver-manager/blob/master/driver-manager#L573