This repository has been archived by the owner on Mar 15, 2021. It is now read-only.
Currently at Spotify, an ffwd container is injected into each pod by an admission controller.
It has been slow and cumbersome to roll out new versions of ffwd, since doing so requires recreating all the pods.
An alternative to the sidecar approach is to run ffwd as a DaemonSet. Fluentd, which ships logs off the GKE nodes, is deployed in a similar way. However, fluentd gets metadata about the logs based on filename (this was the case in 2018, it might be different now?).
This approach doesn't come without its own unique set of challenges, some of which are outlined below.
We would need to map the incoming IP address to a pod to get metadata such as the pod name. IP addresses could move around quickly, so this mapping would need to be kept fresh. We could watch for pod change events and use that as a cache buster.
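As a rough sketch of that cache-busting idea: an in-memory IP-to-metadata map that is refreshed on pod add/modify events and invalidated on delete, so a recycled IP never resolves to stale metadata. The `PodIpCache` class and the event shape here are hypothetical; in a real DaemonSet the events would come from a Kubernetes watch on pods scheduled to the local node.

```python
class PodIpCache:
    """Hypothetical node-local cache mapping pod IP -> pod metadata."""

    def __init__(self):
        self._by_ip = {}

    def handle_event(self, event_type, pod_ip, metadata):
        # ADDED/MODIFIED events refresh the entry; DELETED busts the cache
        # so a reused IP does not return metadata for a dead pod.
        if event_type in ("ADDED", "MODIFIED"):
            self._by_ip[pod_ip] = metadata
        elif event_type == "DELETED":
            self._by_ip.pop(pod_ip, None)

    def lookup(self, pod_ip):
        # Returns None for an unknown IP; the caller decides whether to
        # drop the metric or buffer it until the cache catches up.
        return self._by_ip.get(pod_ip)


cache = PodIpCache()
cache.handle_event("ADDED", "10.4.0.7", {"podname": "my-app-abc123"})
print(cache.lookup("10.4.0.7"))
cache.handle_event("DELETED", "10.4.0.7", None)
print(cache.lookup("10.4.0.7"))
```

Even with watch-driven invalidation there is an unavoidable window between a pod dying and the DELETED event arriving, which is part of why the race concern below comes up.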
Does the UDP buffer need to be sized even higher? Currently each pod on a node gets its own ffwd/UDP buffer.
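To illustrate the buffer concern: with one shared ffwd per node, a single UDP receive buffer absorbs every pod's bursts, so it likely needs to be much larger than a per-pod buffer. A bigger `SO_RCVBUF` can be requested at socket creation (on Linux the kernel caps the grant at `net.core.rmem_max`). The 8 MiB figure below is illustrative, not a recommendation.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request an 8 MiB receive buffer for the shared node-local listener.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
# The kernel may grant less (capped by net.core.rmem_max) or report
# double the requested value on Linux; always read back what was granted.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
sock.close()
```

If the grant comes back well below the request, `net.core.rmem_max` on the node would need to be raised as well.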
Part of this issue should be doing the discovery work to see how feasible it would be.
> We would need to map the incoming IP address to a pod to get metadata such as the pod name. IP addresses could move around quickly, so this mapping would need to be kept fresh. We could watch for pod change events and use that as a cache buster.
This sounds like a recipe for various race conditions. How do you feel about instead requiring the application to extract all the metadata it needs and be responsible for decorating the metrics it sends to the node-local FFWD? This was the plan with the metrics-api and the reason we added the TagExtractor. We can/should(?) expand this to also extract resource identifiers (#155).
> Does the UDP buffer need to be sized even higher? Currently each pod on a node gets its own ffwd/UDP buffer.
Is there a reason we want to keep using UDP, or does it make sense to switch to a more reliable transport? For instance, we could convert the metrics-api into a gRPC API that FFWD would implement, and migrate clients over to that. The communication would still be over localhost.
hexedpackets changed the title from "[k8s] running ffwd as a daemon set" to "[k8s] running ffwd as a DaemonSet" on May 13, 2020.