-
We run .NET applications using the mcr.microsoft.com/dotnet/aspnet:8.0 image on RKE2 v1.26.11+rke2r1 provisioned by Rancher. The application reports approximately 10 MiB through the dotnet_total_memory_bytes metric, while kubectl top pod reports about 90 MiB, matching the container_memory_working_set_bytes metric. We ran these tests with several different applications and also tried replacing the default rke2-metrics-server with the upstream metrics-server Helm chart, with the same results. We tested the same application on AWS EKS, where the two metrics are almost identical. Can anyone explain the discrepancy we observe on RKE2?
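For concreteness, the two figures can be compared from inside a single process. Here is a minimal sketch, assuming dotnet_total_memory_bytes is exported by prometheus-net's default .NET collectors (which report GC.GetTotalMemory); the class name and output formatting are illustrative only:

```csharp
// Minimal sketch: the runtime-level figure vs. the process-level figure,
// read from inside the same process.
using System;
using System.Diagnostics;

class TwoNumbers
{
    static void Main()
    {
        // Live managed GC heap: the basis of dotnet_total_memory_bytes
        // (assuming prometheus-net's default .NET collectors).
        long gcHeap = GC.GetTotalMemory(forceFullCollection: false);

        // Resident memory the kernel charges to the whole process, which is
        // what container-level metrics are built from.
        long workingSet = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine($"GC heap:     {gcHeap / (1024.0 * 1024.0):F1} MiB");
        Console.WriteLine($"Working set: {workingSet / (1024.0 * 1024.0):F1} MiB");
    }
}
```

Run inside an affected pod, this should show whether the gap is already visible at the process level, before any cluster metrics pipeline is involved.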
-
I don't think this is an RKE2 question. You should probably investigate how Linux container memory accounting (cgroup metrics) differs from the metrics reported by the .NET runtime itself.
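As a starting point for that comparison, here is a sketch from the runtime's side (a hypothetical example assuming .NET 6+, where GCMemoryInfo exposes TotalCommittedBytes):

```csharp
// Sketch: break the process's memory down from the runtime's point of view.
using System;
using System.Diagnostics;

class RuntimeVsKernel
{
    static void Main()
    {
        GCMemoryInfo gcInfo = GC.GetGCMemoryInfo();

        // Live managed objects: roughly what dotnet_total_memory_bytes tracks.
        long heapUsed = GC.GetTotalMemory(forceFullCollection: false);

        // Memory the GC has actually committed from the OS for its heaps.
        long gcCommitted = gcInfo.TotalCommittedBytes;

        // Resident memory the kernel charges to the whole process: the basis
        // of the container-level metric.
        long workingSet = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine($"Managed heap (live):  {heapUsed >> 20} MiB");
        Console.WriteLine($"GC committed:         {gcCommitted >> 20} MiB");
        Console.WriteLine($"Process working set:  {workingSet >> 20} MiB");

        // The gap between the working set and the GC-committed memory is
        // memory the runtime metric never counts: CoreCLR itself, JIT-compiled
        // code, mapped assembly images, thread stacks, and native libraries.
        // On a small ASP.NET app this overhead is commonly tens of MiB.
    }
}
```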
-
There is nothing in this project's codebase that provides those metrics. The numbers served by metrics-server come from the kubelet's pod metrics, which are produced by its embedded cAdvisor; cAdvisor retrieves them from containerd via CRI, and containerd in turn reads them from the Linux kernel's cgroup accounting. I can't tell you why you'd get different metrics from different clusters; I suspect perhaps you're using different kernels or different container runtimes, and the memory accounting comes out slightly differently.
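For what it's worth, cAdvisor derives the working set roughly as sketched below. This assumes cgroup v2 with the unified hierarchy mounted at /sys/fs/cgroup; on cgroup v1 nodes the equivalent inputs are memory.usage_in_bytes and the total_inactive_file field of memory.stat, which is one place a kernel or runtime difference between clusters could show up:

```csharp
// Sketch: compute the working set the way cAdvisor does, from inside the
// container. Assumes cgroup v2 mounted at /sys/fs/cgroup.
using System;
using System.IO;

class WorkingSet
{
    static void Main()
    {
        // Total bytes the kernel currently charges to this container's cgroup.
        long usage = long.Parse(
            File.ReadAllText("/sys/fs/cgroup/memory.current").Trim());

        // Inactive file-backed page cache is reclaimable under memory
        // pressure, so cAdvisor subtracts it to form the working set.
        long inactiveFile = 0;
        foreach (string line in File.ReadLines("/sys/fs/cgroup/memory.stat"))
        {
            string[] parts = line.Split(' ');
            if (parts.Length == 2 && parts[0] == "inactive_file")
            {
                inactiveFile = long.Parse(parts[1]);
                break;
            }
        }

        long workingSet = Math.Max(0, usage - inactiveFile);
        Console.WriteLine(
            $"working set ~ {workingSet / (1024.0 * 1024.0):F1} MiB");
    }
}
```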