Currently, tf-yarn can only reserve GPUs with node-level granularity, i.e. it assumes that a GPU node has a capacity of a single container and then uses all of the GPUs on that node. It is possible to restrict a tf-yarn container to a subset of the GPUs:

- augment `TaskSpec` with a `num_gpus` field;
- prior to running `_dispatch_task`, discover which GPUs are not in use and list them explicitly in `CUDA_VISIBLE_DEVICES` (see the sketch after this list).
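A minimal sketch of the second step, assuming idle GPUs can be detected by polling `nvidia-smi` for per-device memory usage. The helper names, the memory threshold, and the way the result is wired into the environment are all hypothetical and not part of the current tf-yarn API:

```python
import os
import subprocess


def _free_gpu_indices(max_memory_mib=100):
    """Return indices of GPUs that look idle.

    Hypothetical heuristic: a GPU is considered free if its used
    memory reported by nvidia-smi is below a small threshold.
    """
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,memory.used",
         "--format=csv,noheader,nounits"],
        universal_newlines=True)
    free = []
    for line in out.strip().splitlines():
        index, memory_used = (field.strip() for field in line.split(","))
        if int(memory_used) < max_memory_mib:
            free.append(index)
    return free


def _restrict_to_free_gpus(num_gpus):
    """Expose at most ``num_gpus`` idle GPUs to the current process by
    setting CUDA_VISIBLE_DEVICES before the task is dispatched."""
    free = _free_gpu_indices()
    if len(free) < num_gpus:
        raise RuntimeError(
            "requested %d GPUs but only %d appear free" % (num_gpus, len(free)))
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(free[:num_gpus])
```

With a `num_gpus` field on `TaskSpec`, the container entry point could then call something like `_restrict_to_free_gpus(task_spec.num_gpus)` just before `_dispatch_task` runs, so TensorFlow only sees the devices it was granted.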