
wmeta: add nvml collector #32109

Merged · 9 commits · Jan 15, 2025
Conversation

@gjulianm (Contributor) commented Dec 12, 2024

What does this PR do?

This PR adds the NVML collector to workloadmeta, so that we collect data about the NVIDIA GPUs present in the system that can be used in other parts of the agent, including the tagger.

Motivation

Uniform tags for GPU-related metrics, centralizing queries of GPU information.

Describe how you validated your changes

Unit tests included for the NVML collector using a mock. E2E tests also added to ensure workloadmeta is correctly populated.

Manual validation was done by starting the agent on a GPU-enabled host and verifying with agent workload-list that the expected data was present in the store.

Output of workload-list:

Repo tags: [601427279990.dkr.ecr.us-east-1.amazonaws.com/guillermo.julian/sandbox:operator-check]
Repo digests: [601427279990.dkr.ecr.us-east-1.amazonaws.com/guillermo.julian/sandbox@sha256:96e1eb5e18c3bd95daf6a39a26fa48481dceacd2fc85e7a026aac9faadd58344]
===

=== Entity gpu sources(merged):[runtime] id: GPU-0d930e09-fcba-2b2c-d8d5-4ed50eddb8eb ===
----------- Entity ID -----------
Kind: gpu ID: GPU-0d930e09-fcba-2b2c-d8d5-4ed50eddb8eb

----------- Entity Meta -----------
Name: Tesla T4
Namespace:

Vendor: nvidia
Device: Tesla T4
Active PIDs: []
Index: 1
===

=== Entity gpu sources(merged):[runtime] id: GPU-a1bbda8c-1439-53ce-eb65-829edc056822 ===
----------- Entity ID -----------
Kind: gpu ID: GPU-a1bbda8c-1439-53ce-eb65-829edc056822

----------- Entity Meta -----------
Name: Tesla T4
Namespace:

Vendor: nvidia
Device: Tesla T4
Active PIDs: []
Index: 2
===

=== Entity gpu sources(merged):[runtime] id: GPU-a5bf1a5a-6352-ed23-7b3f-0640c56d6cf7 ===
----------- Entity ID -----------
Kind: gpu ID: GPU-a5bf1a5a-6352-ed23-7b3f-0640c56d6cf7

----------- Entity Meta -----------
Name: Tesla T4
Namespace:

Vendor: nvidia
Device: Tesla T4
Active PIDs: []
Index: 3
===

=== Entity gpu sources(merged):[runtime] id: GPU-e55ebc61-27b2-5d57-1188-e73b52917fc2 ===
----------- Entity ID -----------
Kind: gpu ID: GPU-e55ebc61-27b2-5d57-1188-e73b52917fc2

----------- Entity Meta -----------
Name: Tesla T4
Namespace:

Vendor: nvidia
Device: Tesla T4
Active PIDs: []
Index: 0
===

Possible Drawbacks / Trade-offs

Additional Notes

A CheckLibraryExists function has been added that calls dlopen to check whether the NVML library exists. This avoids importing NVML in pkg/config/env for feature detection, which would add the NVML dependency to almost all agent packages and increase the binary size by almost 1MB.

In another PR we will allow a global setting for the NVML library path, as that setting would affect not only this code but also the GPU check.

@gjulianm gjulianm self-assigned this Dec 12, 2024
@gjulianm gjulianm force-pushed the guillermo.julian/gpu-workloadmeta branch from 3511c39 to ec6d319 Compare December 17, 2024 10:43
Base automatically changed from guillermo.julian/gpu-workloadmeta to main January 7, 2025 11:22
@gjulianm gjulianm force-pushed the guillermo.julian/gpu-wmeta-collector branch from 8c4fcba to 903ea80 Compare January 8, 2025 10:18
@github-actions github-actions bot added the long review and team/ebpf-platform labels Jan 8, 2025
@gjulianm gjulianm added the changelog/no-changelog and qa/done labels Jan 8, 2025
@gjulianm gjulianm force-pushed the guillermo.julian/gpu-wmeta-collector branch 2 times, most recently from d667588 to aa84a91 Compare January 8, 2025 12:01
agent-platform-auto-pr bot commented Jan 8, 2025

Uncompressed package size comparison

Comparison with ancestor 7eae6b9eb0717c717d67211828ed3f592eac0b8d

Diff per package
| package | diff | status | size | ancestor | threshold |
|---|---|---|---|---|---|
| datadog-agent-amd64-deb | 1.21MB | ❌ | 1005.74MB | 1004.53MB | 0.50MB |
| datadog-agent-x86_64-rpm | 1.21MB | ❌ | 1015.06MB | 1013.85MB | 0.50MB |
| datadog-agent-x86_64-suse | 1.21MB | ❌ | 1015.06MB | 1013.85MB | 0.50MB |
| datadog-agent-aarch64-rpm | 1.16MB | ❌ | 998.94MB | 997.78MB | 0.50MB |
| datadog-agent-arm64-deb | 1.16MB | ❌ | 989.64MB | 988.48MB | 0.50MB |
| datadog-heroku-agent-amd64-deb | 0.02MB | ⚠️ | 560.99MB | 560.97MB | 0.50MB |
| datadog-iot-agent-aarch64-rpm | 0.01MB | ⚠️ | 109.70MB | 109.70MB | 0.50MB |
| datadog-dogstatsd-amd64-deb | 0.01MB | ⚠️ | 58.84MB | 58.83MB | 0.50MB |
| datadog-dogstatsd-arm64-deb | 0.01MB | ⚠️ | 56.34MB | 56.33MB | 0.50MB |
| datadog-iot-agent-arm64-deb | 0.01MB | ⚠️ | 109.63MB | 109.63MB | 0.50MB |
| datadog-dogstatsd-x86_64-rpm | 0.01MB | ⚠️ | 58.91MB | 58.90MB | 0.50MB |
| datadog-dogstatsd-x86_64-suse | 0.01MB | ⚠️ | 58.91MB | 58.90MB | 0.50MB |
| datadog-iot-agent-amd64-deb | 0.01MB | ⚠️ | 114.21MB | 114.20MB | 0.50MB |
| datadog-iot-agent-x86_64-rpm | 0.01MB | ⚠️ | 114.28MB | 114.27MB | 0.50MB |
| datadog-iot-agent-x86_64-suse | 0.01MB | ⚠️ | 114.28MB | 114.27MB | 0.50MB |

Decision

❌ Failed

@gjulianm gjulianm force-pushed the guillermo.julian/gpu-wmeta-collector branch from 9fc9d6e to 89ba025 Compare January 8, 2025 13:10
cit-pr-commenter bot commented Jan 8, 2025

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 5cfa1088-a8ac-4733-9c8c-d274fa6ddb6d

Baseline: 7eae6b9
Comparison: 8a226b3
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.94 | [+0.23, +1.65] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | +0.72 | [+0.64, +0.80] | 1 | Logs, bounds checks dashboard |
| ➖ | file_tree | memory utilization | +0.67 | [+0.53, +0.81] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.52 | [+0.47, +0.56] | 1 | Logs, bounds checks dashboard |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.18 | [-0.59, +0.95] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.03 | [-0.69, +0.74] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.02 | [-0.93, +0.98] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.00 | [-0.85, +0.85] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | -0.01 | [-0.65, +0.63] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.02 | [-0.12, +0.09] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | -0.02 | [-0.92, +0.88] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.19 | [-0.98, +0.60] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.20 | [-0.26, -0.14] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.32 | [-0.78, +0.14] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -2.93 | [-6.05, +0.19] | 1 | Logs |

Bounds Checks: ✅ Passed

| experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|
| file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_logs | lost_bytes | 10/10 | |
| quality_gate_logs | memory_usage | 10/10 | |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@gjulianm gjulianm force-pushed the guillermo.julian/gpu-wmeta-collector branch from 89ba025 to 66a0d1c Compare January 8, 2025 14:09
@github-actions github-actions bot added the medium review and long review labels and removed the long review and medium review labels Jan 8, 2025
agent-platform-auto-pr bot commented Jan 8, 2025

Test changes on VM

Use this command from test-infra-definitions to manually test this PR changes on a VM:

inv aws.create-vm --pipeline-id=52824148 --os-family=ubuntu

Note: This applies to commit 8a226b3

@gjulianm gjulianm force-pushed the guillermo.julian/gpu-wmeta-collector branch from c8148b2 to 8f6cf87 Compare January 9, 2025 11:04
@gjulianm gjulianm marked this pull request as ready for review January 9, 2025 11:05
@gjulianm gjulianm requested review from a team as code owners January 9, 2025 11:05
@gjulianm gjulianm requested a review from hush-hush January 9, 2025 11:05
@gjulianm gjulianm added the ask-review label Jan 9, 2025
// This product includes software developed at Datadog (https://www.datadoghq.com/).
// Copyright 2024-present Datadog, Inc.

// Package nvml implements the NVML collector for workloadmeta
Member:

Do we need this file?

Contributor Author:

I don't think it's really needed; I was following the structure of the other collectors, which do have this file (and the _nop.go file too).

Comment on lines 255 to 260
log.Infof("Agent did not find NVML library: %v", err)
return
}

features[NVML] = struct{}{}
log.Infof("Agent found NVML library")
Member:

Those log lines look like debug information, no?

Contributor Author:

Other detect* functions log similar information at Info level; maybe I could downgrade the one for "not finding it", as most installations won't have this library.

Comment on lines +20 to +25
// CheckLibraryExists checks if a library is available on the system by trying
// to open it with dlopen. It returns an error if the library is not found. This
// is the most direct way to check for a library's presence on Linux: there are
// multiple sources of paths for library searches, so it's best to use the same
// mechanism that the loader uses.
func CheckLibraryExists(libname string) error {
Member:

Is this really the only way? Could we not use something like ldconfig -p or similar?

Loading the entire library into memory just to test whether it exists seems a bit overkill, no?

Contributor Author:

There's no straightforward way to do this; ldconfig -p requires parsing a lot of lines, so it seems more fragile. Also, if the library is found, the most likely scenario is that we will be loading it anyway, so we are not losing much.

@usamasaqib (Contributor) commented:

Does the PR require modifying all these go.mod files?

@gjulianm (Author) commented Jan 9, 2025

Does the PR require modifying all these go.mod files?

Apparently yes: bringing the import into pkg/util/system bumped the version in all those files.


event := workloadmeta.CollectorEvent{
Source: workloadmeta.SourceRuntime,
Type: workloadmeta.EventTypeSet,
@gabedos (Contributor) commented Jan 9, 2025:

Curious: are there any cases where an NVIDIA GPU can be "removed" from the environment, or where nvml can no longer detect it, requiring a `workloadmeta.EventTypeUnset`? In the GPU case it seems impossible, but this is the only difference from the other collectors. Otherwise your collector looks good 👍

Contributor Author:

A GPU could get disconnected (e.g., by disabling the PCI device), although that's not normal operation. I can add support for that just in case, though.
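The disconnect scenario raised in this thread could be handled by diffing the GPUs seen on each pull against those previously stored. The sketch below is a hypothetical stand-in: the `Event`/`EventType` names and the `diffGPUs` helper only mirror workloadmeta's set/unset semantics, they are not the real API:

```go
package main

import "fmt"

// EventType mirrors workloadmeta's set/unset semantics (names are
// illustrative, not the actual datadog-agent types).
type EventType string

const (
	EventTypeSet   EventType = "set"
	EventTypeUnset EventType = "unset"
)

// Event pairs an event type with a GPU UUID.
type Event struct {
	Type EventType
	GPU  string
}

// diffGPUs emits a set event for every GPU seen in the current pull and an
// unset event for any previously known GPU that has disappeared (e.g. a
// disabled PCI device). It returns the events plus the new known set.
func diffGPUs(known map[string]bool, seen []string) ([]Event, map[string]bool) {
	var events []Event
	next := make(map[string]bool, len(seen))
	for _, id := range seen {
		events = append(events, Event{Type: EventTypeSet, GPU: id})
		next[id] = true
	}
	for id := range known {
		if !next[id] {
			events = append(events, Event{Type: EventTypeUnset, GPU: id})
		}
	}
	return events, next
}

func main() {
	known := map[string]bool{"GPU-a": true, "GPU-b": true}
	// GPU-b has vanished since the last pull, so it should be unset.
	events, _ := diffGPUs(known, []string{"GPU-a"})
	for _, e := range events {
		fmt.Println(e.Type, e.GPU)
	}
}
```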

Comment on lines +251 to +253
// Use dlopen to search for the library to avoid importing the go-nvml package here,
// which is 1MB in size and would increase the agent binary size, when we don't really
// need it for anything else.
@GustavoCaso (Member) commented Jan 10, 2025:

I believe this comment is not accurate.

Since we build the NVML collector for Linux environments, the collector includes the go-nvml library ("github.com/NVIDIA/go-nvml/pkg/nvml") even if the NVML shared object is not present on the system.

There is no easy way to solve this today: Go does not provide a way to add libraries at runtime based on a condition. We could use build flags, but that might end up being more complex. I think we should update the comment to avoid confusion and leave it as is 😄

Contributor Author:

The issue I found was that importing go-nvml meant it was imported in a lot of packages, and I think it ended up in other binaries too; the resulting size increase was 1-2MB.

@zhuminyi (Contributor) commented:

Could you add more description on test/QA. For example,

  • How is the collector enabled in the config (nvml installed?)
  • the output of agent workload-list


var events []workloadmeta.CollectorEvent
for i := 0; i < count; i++ {
dev, ret := c.nvmlLib.DeviceGetHandleByIndex(i)
Contributor:

Is the permission for core agent an issue? I thought one of your RFC mentioned insufficient permission to access the GPU via nvml lib

Contributor Author:

It's only an issue in our staging environments; in any case, a permission failure would show up earlier, when trying to start the collector. We haven't seen failures once the NVML library is initialised, as it seems to keep the handles to the devices open.
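For context, the device-enumeration loop quoted in this thread follows a simple count-then-index pattern. A self-contained sketch, with a hypothetical `deviceLib` interface standing in for go-nvml and a mock mimicking the unit-test setup (the entity fields echo the workload-list output above, but none of these types are the collector's actual types):

```go
package main

import "fmt"

// deviceLib abstracts the few NVML calls the enumeration needs; the real
// collector uses the go-nvml interface, this stand-in lets the sketch run
// without the NVML shared library.
type deviceLib interface {
	DeviceCount() (int, error)
	DeviceUUID(index int) (string, error)
	DeviceName(index int) (string, error)
}

// gpuEntity carries the fields shown in the workload-list output.
type gpuEntity struct {
	ID     string
	Name   string
	Vendor string
	Index  int
}

// pull enumerates devices by index and builds one entity per GPU, as in the
// quoted loop.
func pull(lib deviceLib) ([]gpuEntity, error) {
	count, err := lib.DeviceCount()
	if err != nil {
		return nil, err
	}
	entities := make([]gpuEntity, 0, count)
	for i := 0; i < count; i++ {
		uuid, err := lib.DeviceUUID(i)
		if err != nil {
			return nil, err
		}
		name, err := lib.DeviceName(i)
		if err != nil {
			return nil, err
		}
		entities = append(entities, gpuEntity{ID: uuid, Name: name, Vendor: "nvidia", Index: i})
	}
	return entities, nil
}

// mockLib imitates a host with two Tesla T4 cards, as in the unit tests.
type mockLib struct{}

func (mockLib) DeviceCount() (int, error)        { return 2, nil }
func (mockLib) DeviceUUID(i int) (string, error) { return fmt.Sprintf("GPU-%04d", i), nil }
func (mockLib) DeviceName(i int) (string, error) { return "Tesla T4", nil }

func main() {
	entities, _ := pull(mockLib{})
	for _, e := range entities {
		fmt.Printf("%s %s index=%d\n", e.ID, e.Name, e.Index)
	}
}
```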

@gjulianm (Author) commented:

Could you add more description on test/QA. For example,

  • How is the collector enabled in the config (nvml installed?)
  • the output of agent workload-list

The collector is started automatically if the NVML library is detected. Added the output of workload-list (actually, thanks a lot for mentioning this, because I thought I had validated this PR but hadn't done so properly, so I caught some issues thanks to you).


cit-pr-commenter bot commented Jan 13, 2025

Go Package Import Differences

Baseline: 7eae6b9
Comparison: 8a226b3

- +1, -0 (+github.com/DataDog/datadog-agent/comp/core/workloadmeta/collectors/internal/nvml):
  agent (linux/amd64, linux/arm64, windows/amd64, darwin/amd64, darwin/arm64), iot-agent (linux/amd64, linux/arm64), heroku-agent (linux/amd64), cluster-agent (linux/amd64, linux/arm64), process-agent (windows/amd64, darwin/amd64, darwin/arm64), security-agent (windows/amd64)
- +3, -0 (+github.com/DataDog/datadog-agent/comp/core/workloadmeta/collectors/internal/nvml, +github.com/NVIDIA/go-nvml/pkg/dl, +github.com/NVIDIA/go-nvml/pkg/nvml):
  cluster-agent-cloudfoundry (linux/amd64, linux/arm64), process-agent (linux/amd64, linux/arm64), heroku-process-agent (linux/amd64)

@gjulianm gjulianm requested a review from a team as a code owner January 13, 2025 11:32
agent-platform-auto-pr bot commented Jan 13, 2025

Gitlab CI Configuration Changes

Modified Jobs

.on_gpu_or_e2e_changes
  .on_gpu_or_e2e_changes:
  - if: $RUN_E2E_TESTS == "off"
    when: never
  - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
    when: never
  - if: $RUN_E2E_TESTS == "on"
    when: on_success
  - if: $CI_COMMIT_BRANCH == "main"
    when: on_success
  - if: $CI_COMMIT_BRANCH =~ /^[0-9]+\.[0-9]+\.x$/
    when: on_success
  - if: $CI_COMMIT_TAG =~ /^[0-9]+\.[0-9]+\.[0-9]+-rc\.[0-9]+$/
    when: on_success
  - changes:
      compare_to: main
      paths:
      - .gitlab/e2e/e2e.yml
      - test/new-e2e/pkg/**/*
      - test/new-e2e/go.mod
      - flakes.yaml
  - changes:
      compare_to: main
      paths:
      - pkg/gpu/**/*
      - test/new-e2e/tests/gpu/**/*
      - pkg/collector/corechecks/gpu/**/*
+     - comp/core/workloadmeta/collectors/internal/nvml/**/*
new-e2e-gpu
  new-e2e-gpu:
    after_script:
    - $CI_PROJECT_DIR/tools/ci/junit_upload.sh
    artifacts:
      expire_in: 2 weeks
      paths:
      - $E2E_OUTPUT_DIR
      - junit-*.tgz
      reports:
        annotations:
        - $EXTERNAL_LINKS_PATH
      when: always
    before_script:
    - mkdir -p $GOPATH/pkg/mod/cache && tar xJf modcache_e2e.tar.xz -C $GOPATH/pkg/mod/cache
      || exit 101
    - rm -f modcache_e2e.tar.xz
    - mkdir -p ~/.aws
    - $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $AGENT_QA_E2E profile >> ~/.aws/config
      || exit $?
    - export AWS_PROFILE=agent-qa-ci
    - $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $AGENT_QA_E2E ssh_public_key_rsa > $E2E_AWS_PUBLIC_KEY_PATH
      || exit $?
    - touch $E2E_AWS_PRIVATE_KEY_PATH && chmod 600 $E2E_AWS_PRIVATE_KEY_PATH && $CI_PROJECT_DIR/tools/ci/fetch_secret.sh
      $AGENT_QA_E2E ssh_key_rsa > $E2E_AWS_PRIVATE_KEY_PATH || exit $?
    - $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $AGENT_QA_E2E ssh_public_key_rsa > $E2E_AZURE_PUBLIC_KEY_PATH
      || exit $?
    - touch $E2E_AZURE_PRIVATE_KEY_PATH && chmod 600 $E2E_AZURE_PRIVATE_KEY_PATH &&
      $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $AGENT_QA_E2E ssh_key_rsa > $E2E_AZURE_PRIVATE_KEY_PATH
      || exit $?
    - $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $AGENT_QA_E2E ssh_public_key_rsa > $E2E_GCP_PUBLIC_KEY_PATH
      || exit $?
    - touch $E2E_GCP_PRIVATE_KEY_PATH && chmod 600 $E2E_GCP_PRIVATE_KEY_PATH && $CI_PROJECT_DIR/tools/ci/fetch_secret.sh
      $AGENT_QA_E2E ssh_key_rsa > $E2E_GCP_PRIVATE_KEY_PATH || exit $?
    - pulumi login "s3://dd-pulumi-state?region=us-east-1&awssdk=v2&profile=$AWS_PROFILE"
    - ARM_CLIENT_ID=$($CI_PROJECT_DIR/tools/ci/fetch_secret.sh $E2E_AZURE client_id)
      || exit $?; export ARM_CLIENT_ID
    - ARM_CLIENT_SECRET=$($CI_PROJECT_DIR/tools/ci/fetch_secret.sh $E2E_AZURE token)
      || exit $?; export ARM_CLIENT_SECRET
    - ARM_TENANT_ID=$($CI_PROJECT_DIR/tools/ci/fetch_secret.sh $E2E_AZURE tenant_id)
      || exit $?; export ARM_TENANT_ID
    - ARM_SUBSCRIPTION_ID=$($CI_PROJECT_DIR/tools/ci/fetch_secret.sh $E2E_AZURE subscription_id)
      || exit $?; export ARM_SUBSCRIPTION_ID
    - $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $E2E_GCP credentials_json > ~/gcp-credentials.json
      || exit $?
    - export GOOGLE_APPLICATION_CREDENTIALS=~/gcp-credentials.json
    - inv -e gitlab.generate-ci-visibility-links --output=$EXTERNAL_LINKS_PATH
    image: registry.ddbuild.io/ci/test-infra-definitions/runner$TEST_INFRA_DEFINITIONS_BUILDIMAGES_SUFFIX:$TEST_INFRA_DEFINITIONS_BUILDIMAGES
    needs:
    - go_e2e_deps
    - deploy_deb_testing-a7_x64
    rules:
    - if: $RUN_E2E_TESTS == "off"
      when: never
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - if: $RUN_E2E_TESTS == "on"
      when: on_success
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success
    - if: $CI_COMMIT_BRANCH =~ /^[0-9]+\.[0-9]+\.x$/
      when: on_success
    - if: $CI_COMMIT_TAG =~ /^[0-9]+\.[0-9]+\.[0-9]+-rc\.[0-9]+$/
      when: on_success
    - changes:
        compare_to: main
        paths:
        - .gitlab/e2e/e2e.yml
        - test/new-e2e/pkg/**/*
        - test/new-e2e/go.mod
        - flakes.yaml
    - changes:
        compare_to: main
        paths:
        - pkg/gpu/**/*
        - test/new-e2e/tests/gpu/**/*
        - pkg/collector/corechecks/gpu/**/*
+       - comp/core/workloadmeta/collectors/internal/nvml/**/*
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - allow_failure: true
      when: manual
    script:
    - inv -e new-e2e-tests.run --targets $TARGETS -c ddagent:imagePullRegistry=669783387624.dkr.ecr.us-east-1.amazonaws.com
      -c ddagent:imagePullUsername=AWS -c ddagent:imagePullPassword=$(aws ecr get-login-password)
      --junit-tar junit-${CI_JOB_ID}.tgz ${EXTRA_PARAMS} --test-washer --logs-folder=$E2E_OUTPUT_DIR/logs
      --logs-post-processing --logs-post-processing-test-depth=$E2E_LOGS_PROCESSING_TEST_DEPTH
    stage: e2e
    tags:
    - arch:amd64
    variables:
      E2E_AWS_PRIVATE_KEY_PATH: /tmp/agent-qa-aws-ssh-key
      E2E_AWS_PUBLIC_KEY_PATH: /tmp/agent-qa-aws-ssh-key.pub
      E2E_AZURE_PRIVATE_KEY_PATH: /tmp/agent-qa-azure-ssh-key
      E2E_AZURE_PUBLIC_KEY_PATH: /tmp/agent-qa-azure-ssh-key.pub
      E2E_COMMIT_SHA: $CI_COMMIT_SHORT_SHA
      E2E_GCP_PRIVATE_KEY_PATH: /tmp/agent-qa-gcp-ssh-key
      E2E_GCP_PUBLIC_KEY_PATH: /tmp/agent-qa-gcp-ssh-key.pub
      E2E_KEY_PAIR_NAME: datadog-agent-ci-rsa
      E2E_LOGS_PROCESSING_TEST_DEPTH: 1
      E2E_OUTPUT_DIR: $CI_PROJECT_DIR/e2e-output
      E2E_PIPELINE_ID: $CI_PIPELINE_ID
      E2E_PULUMI_LOG_LEVEL: 10
      EXTERNAL_LINKS_PATH: external_links_$CI_JOB_ID.json
      KUBERNETES_CPU_REQUEST: 6
      KUBERNETES_MEMORY_LIMIT: 16Gi
      KUBERNETES_MEMORY_REQUEST: 12Gi
      SHOULD_RUN_IN_FLAKES_FINDER: 'true'
      TARGETS: ./tests/gpu
      TEAM: ebpf-platform

Changes Summary

Removed: 0 · Modified: 2 · Added: 0 · Renamed: 0

ℹ️ Diff available in the job log.

@KSerrania (Contributor) commented:

Exception approved for bypassing the package size check; follow-up work is tracked in EBPF-631. I'm going to force-merge the PR.

@KSerrania KSerrania merged commit 14ca022 into main Jan 15, 2025
329 of 331 checks passed
@KSerrania KSerrania deleted the guillermo.julian/gpu-wmeta-collector branch January 15, 2025 10:21
@github-actions github-actions bot added this to the 7.63.0 milestone Jan 15, 2025
Labels
ask-review · changelog/no-changelog · long review · qa/done · team/ebpf-platform