This repository has been archived by the owner on Dec 11, 2024. It is now read-only.

Update the CM MLPerf inference docs for CUDA device running on host #14

Open
arjunsuresh opened this issue Sep 27, 2024 · 1 comment
Labels: documentation (Improvements or additions to documentation), enhancement (New feature or request)

Comments

@arjunsuresh

We need to update the MLPerf inference docs for native CUDA runs:

  1. Add a remark that unless CUDA, cuDNN, and TensorRT are available in the environment, it is recommended to use the docker option.
  2. In the run options, specify the flags for passing in the cuDNN and TensorRT run files.
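For point 2, the docs could show something along these lines. This is only an illustrative sketch: the exact flag names for supplying the cuDNN and TensorRT archives (`--cudnn_tar_file`, `--tensorrt_tar_file` below) are assumptions and should be confirmed against the current CM script options before publishing.

```
# Hypothetical native CUDA run; flag names for the cuDNN/TensorRT
# archives are illustrative and must be checked against the CM docs.
cm run script --tags=run-mlperf,inference \
   --implementation=nvidia \
   --device=cuda \
   --cudnn_tar_file=<path-to-cudnn-archive> \
   --tensorrt_tar_file=<path-to-tensorrt-archive>
```

If CUDA, cuDNN, or TensorRT are not already installed on the host, the docs should steer users to the `--docker` option instead, per point 1.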
@anandhu-eng

Hi @arjunsuresh, this PR adds the first point.

