
physical worker node memory leak #12623

Closed
smhakcan opened this issue Jan 2, 2025 · 3 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
needs-priority
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@smhakcan

smhakcan commented Jan 2, 2025

Hello,
kubectl top pods -n ingress-nginx
ingress-nginx-controller-gqgdh   2116m   19531Mi   -- running on a physical worker node (no VM)
ingress-nginx-controller-hvtmv   1498m   2526Mi    -- running on a Hyper-V VM


When ingress-nginx runs on a physical worker node in Kubernetes, the memory usage is as shown above. How can I fix this?
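
For context, the CPU capacity of the two worker nodes can be compared directly; as the comments below point out, this is what drives the difference. A minimal sketch, assuming kubectl access to the cluster; the node names in the sample output are placeholders:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu

# Illustrative output (not from this cluster):
# NAME                 CPU
# physical-worker-01   64
# hyperv-worker-01     8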


ingress-nginx image: ingress-nginx/controller:v1.10.1

ingress-nginx ConfigMap data:

allow-snippet-annotations: "false"
client-body-buffer-size: 16m
client-header-buffer-size: 8k
client_max_body_size: 150m
enable-opentelemetry: "true"
enable-real-ip: "true"
forwarded-for-header: X-Forwarded-For
keep-alive-requests: "10000"
large-client-header-buffers: 4 96k
max-worker-connections: "65536"
opentelemetry-config: /etc/nginx/opentelemetry.toml
opentelemetry-operation-name: HTTP $request_method $service_name $uri
opentelemetry-trust-incoming-span: "true"
otel-max-export-batch-size: "512"
otel-max-queuesize: "2048"
otel-sampler: AlwaysOn
otel-sampler-parent-based: "false"
otel-sampler-ratio: "1.0"
otel-schedule-delay-millis: "5000"
otel-service-name: nginx-proxy
otlp-collector-host: otel.opentelemetry.svc.cluster.local
otlp-collector-port: "4317"
preserve-trailing-slash: "true"
proxy-add-original-uri-header: "true"
proxy-body-size: 150m
proxy-buffer-size: 128k
proxy-buffering: "off"
proxy-connect-timeout: "900"
proxy-read-timeout: "900"
proxy-send-timeout: "900"
real-ip-header: X-Real-IP
ssl-redirect: "true"
upstream-keepalive-requests: "2000"
use-forwarded-headers: "true"
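
For completeness, these keys live under the data section of the controller's ConfigMap. A minimal sketch of the surrounding manifest; the name and namespace (ingress-nginx-controller in ingress-nginx) are assumptions based on the default deployment and should match whatever the controller's --configmap flag points to:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller    # assumed default name
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "false"
  proxy-body-size: 150m
  # ... remaining keys exactly as listed above ...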

Thank you for your help.

@smhakcan added the kind/feature label Jan 2, 2025
@k8s-ci-robot added the needs-triage label Jan 2, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and providing further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@Gacko
Member

Gacko commented Jan 3, 2025

Please use a more recent version of the controller and try to reproduce this issue. Thank you!

@Gacko
Member

Gacko commented Jan 3, 2025

Ah, as far as I can tell, you didn't pin the number of worker processes. By default it is set to auto, which simply means "as many worker processes as the node has CPUs". Your physical node probably has a lot more CPUs than your VM. Each of these worker processes allocates a certain amount of memory, so the overall memory consumption increases, especially in bigger clusters.

Please pin the number of worker processes by using the worker-processes ConfigMap option: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#worker-processes.
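
A minimal sketch of that fix, assuming the same ConfigMap as in the issue description; the value "8" is purely illustrative and should be sized to the expected load rather than left at the default auto (one worker per CPU):

data:
  worker-processes: "8"    # pin explicitly; the default "auto" spawns one worker per CPU

After the controller reloads, the effective setting can be checked inside the pod, using the pod name from the output above (assumes grep is available in the controller image):

kubectl exec -n ingress-nginx ingress-nginx-controller-gqgdh -- \
  grep worker_processes /etc/nginx/nginx.conf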
