Propagate veth link status across VxLAN and gRPC overlays #85

Open

Cerebus opened this issue Aug 27, 2024 · 2 comments

Comments

@Cerebus (Contributor) commented Aug 27, 2024

When a pod is deleted, neighboring pods on the same host will see the connecting veth go LOWERLAYERDOWN. This is the normal behavior of veth pairs, and IMHO the proper behavior for meshnet, since it models a physical link failure.
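For the same-host case this is directly observable from the kernel, with no daemon involvement. A minimal sketch (assuming the vishvananda/netlink Go library; "eth1" is a hypothetical pod-side interface name):

```go
// Hedged sketch: read a surviving veth's operational state after its
// peer has been deleted. Assumes the vishvananda/netlink Go library.
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// "eth1" is a hypothetical pod-side interface name.
	link, err := netlink.LinkByName("eth1")
	if err != nil {
		log.Fatal(err)
	}
	// With the peer gone, the kernel reports the lower layer as down
	// (LOWERLAYERDOWN in ip-link terms) automatically.
	fmt.Println(link.Attrs().OperState)
}
```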

When the neighboring pod is on a different host, the connecting veth stays UP/LOWER_UP. This is because the veth is paired with a bridge and VxLAN interface, or with the local meshnetd instance, and none of that structure has changed state.

This changes the behavior of emulated networks. The neighboring pod retains all the routes assigned to the interface until a routing process changes them, and even then the directly attached network prefix route remains. For example, under OSPF the routes that traverse the dead link are re-routed, but the route for the dead link's own prefix is not withdrawn; if OSPF is advertising attached networks, that prefix will remain in every routing table in the network.

I think meshnet should propagate link status changes across VxLAN and gRPC overlays, so as to keep consistent behavior in all cases.
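For concreteness, a rough sketch of what I have in mind, not a proposed implementation: watch local link updates via netlink and mirror each transition to the remote daemon. This assumes the vishvananda/netlink Go library; the notifyPeer/SetPeerLinkState gRPC call is hypothetical.

```go
package main

import (
	"context"
	"log"

	"github.com/vishvananda/netlink"
)

// notifyPeer stands in for the gRPC call to the remote meshnetd;
// the SetPeerLinkState method it would wrap is hypothetical.
func notifyPeer(ctx context.Context, iface string, up bool) error {
	// e.g. client.SetPeerLinkState(ctx, &pb.LinkState{Iface: iface, Up: up})
	return nil
}

func watchLinks(ctx context.Context) error {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	defer close(done)
	if err := netlink.LinkSubscribe(updates, done); err != nil {
		return err
	}
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case u := <-updates:
			attrs := u.Link.Attrs()
			// A real version would filter to meshnet-managed veths only.
			up := attrs.OperState == netlink.OperUp
			if err := notifyPeer(ctx, attrs.Name, up); err != nil {
				log.Printf("propagate %s: %v", attrs.Name, err)
			}
		}
	}
}

func main() {
	log.Fatal(watchLinks(context.Background()))
}
```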

@kingshukdev (Contributor) commented:

I agree with your comment, @Cerebus. I might get some cycles in late September and will try to add this.

@Cerebus (Contributor, Author) commented Aug 28, 2024

Maybe as a per-interface option? Worth discussing. I can see situations where non-propagation might be useful, but maybe those can be covered with netem rate 0.

On second thought: propagate in all cases. When pods share a host there's no way I know of to keep the veth pair up after pod deletion, short of moving the dangling veth to another namespace or bridge, which also seems less than stellar. Emulation drivers will just have to choose between traffic shaping and node deletion, depending on the behavior they want to model.
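The receiving side of the propagation could then be as simple as the sketch below (same assumptions as the sketch above; the wire protocol is hypothetical). Note that admin-downing the host-side peer of a veth pair drives the pod-side end to LOWERLAYERDOWN, which matches the same-host behavior.

```go
package main

import "github.com/vishvananda/netlink"

// mirrorLinkState applies a propagated state change on the receiving
// host. iface is assumed to be the host-side peer of the pod's veth,
// so downing it makes the pod see the link fail exactly as a local
// veth pair would report it.
func mirrorLinkState(iface string, up bool) error {
	link, err := netlink.LinkByName(iface)
	if err != nil {
		return err
	}
	if up {
		return netlink.LinkSetUp(link)
	}
	return netlink.LinkSetDown(link)
}
```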
