skip the return of err when the error in veth create is due to existi... #76

Open · wants to merge 4 commits into base: master
46 changes: 26 additions & 20 deletions plugin/meshnet.go
@@ -19,6 +19,8 @@ import (
 	"google.golang.org/grpc"
 	"google.golang.org/grpc/credentials/insecure"
 
+	"strings"
+
 	mpb "github.com/networkop/meshnet-cni/daemon/proto/meshnet/v1beta1"
 	"github.com/networkop/meshnet-cni/utils/wireutil"
 )

@@ -245,76 +247,80 @@ func cmdAdd(args *skel.CmdArgs) error {
 	}

 	isAlive := peerPod.SrcIp != "" && peerPod.NetNs != ""
-	log.Infof("Add: Is peer pod %s alive?: %t", peerPod.Name, isAlive)
+	log.Infof("Add: Is peer pod %s alive?: %t, local pod %s", peerPod.Name, isAlive, localPod.Name)

 	if isAlive { // This means we're coming up AFTER our peer so things are pretty easy
-		log.Infof("Add: Peer pod %s is alive", peerPod.Name)
+		log.Infof("Add: Peer pod %s is alive, local pod %s", peerPod.Name, localPod.Name)
 		if peerPod.SrcIp == localPod.SrcIp { // This means we're on the same host
 			log.Infof("Add: %s and %s are on the same host", localPod.Name, peerPod.Name)
 			// Creating koko's Veth struct for peer intf
 			peerVeth, err := makeVeth(peerPod.NetNs, link.PeerIntf, link.PeerIp)
 			if err != nil {
-				log.Errorf("Add: Failed to build koko Veth struct")
+				log.Errorf("Add: Failed to build koko Veth struct local pod %s and peer pod %s", localPod.Name, peerPod.Name)
 				return err
 			}

 			// Checking if interfaces already exist
 			iExist, _ := koko.IsExistLinkInNS(myVeth.NsName, myVeth.LinkName)
 			pExist, _ := koko.IsExistLinkInNS(peerVeth.NsName, peerVeth.LinkName)

-			log.Infof("Does the link already exist? Local:%t, Peer:%t", iExist, pExist)
+			log.Infof("Does the link already exist? local pod %s and peer pod %s, Local:%t, Peer:%t", localPod.Name, peerPod.Name, iExist, pExist)
 			if iExist && pExist { // If both link exist, we don't need to do anything
-				log.Info("Both interfaces already exist in namespace")
+				log.Infof("Both interfaces already exist in namespace local pod %s and peer pod %s", localPod.Name, peerPod.Name)
 			} else if !iExist && pExist { // If only peer link exists, we need to destroy it first
-				log.Info("Only peer link exists, removing it first")
+				log.Infof("Only peer link exists, removing it first local pod %s and peer pod %s", localPod.Name, peerPod.Name)
 				if err := peerVeth.RemoveVethLink(); err != nil {
 					log.Errorf("Failed to remove a stale interface %s of my peer %s", peerVeth.LinkName, link.PeerPod)
 					return err
 				}
-				log.Infof("Adding the new veth link to both pods")
+				log.Infof("Adding the new veth link to both pods local pod %s and peer pod %s", localPod.Name, peerPod.Name)
 				if err = koko.MakeVeth(*myVeth, *peerVeth); err != nil {
-					log.Errorf("Error creating VEth pair after peer link remove: %s", err)
+					log.Errorf("Error creating VEth pair after peer link remove: %s, local pod %s and peer pod %s", err, localPod.Name, peerPod.Name)
 					return err
 				}
 			} else if iExist && !pExist { // If only local link exists, we need to destroy it first
-				log.Infof("Only local link exists, removing it first")
+				log.Infof("Only local link exists, removing it first local pod %s and peer pod %s", localPod.Name, peerPod.Name)
 				if err := myVeth.RemoveVethLink(); err != nil {
 					log.Errorf("Failed to remove a local stale VEth interface %s for pod %s", myVeth.LinkName, localPod.Name)
 					return err
 				}
-				log.Infof("Adding the new veth link to both pods")
+				log.Infof("Adding the new veth link to both pods local pod %s and peer pod %s", localPod.Name, peerPod.Name)
 				if err = koko.MakeVeth(*myVeth, *peerVeth); err != nil {
-					log.Errorf("Error creating VEth pair after local link remove: %s", err)
+					log.Errorf("Error creating VEth pair after local link remove: %s, local pod %s and peer pod %s", err, localPod.Name, peerPod.Name)
 					return err
 				}
 			} else { // if neither link exists, we have two options
-				log.Infof("Neither link exists. Checking if we've been skipped")
+				log.Infof("Neither link exists. Checking if we've been skipped local pod %s and peer pod %s", localPod.Name, peerPod.Name)
 				isSkipped, err := meshnetClient.IsSkipped(ctx, &mpb.SkipQuery{
 					Pod:    localPod.Name,
 					Peer:   peerPod.Name,
 					KubeNs: string(cniArgs.K8S_POD_NAMESPACE),
 				})
 				if err != nil {
-					log.Errorf("Failed to read skipped status from our peer")
+					log.Errorf("Local pod %s Failed to read skipped status from our peer %s", localPod.Name, peerPod.Name)
 					return err
 				}
-				log.Infof("Have we been skipped by our peer %s? %t", peerPod.Name, isSkipped.Response)
+				log.Infof("Have we %s been skipped by our peer %s? %t", localPod.Name, peerPod.Name, isSkipped.Response)

 				// Comparing names to determine higher priority
 				higherPrio := localPod.Name > peerPod.Name
-				log.Infof("DO we have a higher priority? %t", higherPrio)
+				log.Infof("DO we %s have a higher priority than peer %s ? %t", localPod.Name, peerPod.Name, higherPrio)

 				if isSkipped.Response || higherPrio { // If peer POD skipped us (booted before us) or we have a higher priority
-					log.Infof("Peer POD has skipped us or we have a higher priority")
+					log.Infof("Peer POD %s has skipped us or we %s have a higher priority", peerPod.Name, localPod.Name)
 					if err = koko.MakeVeth(*myVeth, *peerVeth); err != nil {
 						log.Errorf("Error when creating a new VEth pair with koko: %s", err)
-						log.Infof("MY VETH STRUCT: %+v", spew.Sdump(myVeth))
-						log.Infof("PEER STRUCT: %+v", spew.Sdump(peerVeth))
-						return err
+						log.Infof("local pod %s and peer pod %s MY VETH STRUCT: %+v", localPod.Name, peerPod.Name, spew.Sdump(myVeth))
+						log.Infof("local pod %s and peer pod %s PEER STRUCT: %+v", localPod.Name, peerPod.Name, spew.Sdump(peerVeth))
+						if strings.Contains(err.Error(), "file exists") {
+							log.Infof("race condition hit local pod %s and peer pod %s", localPod.Name, peerPod.Name)
Owner:
Do you just assume that the link was created correctly here? Would there be a case where the link is not configured properly at this stage?
I'm thinking another option is to delete and recreate the link, but I'm not sure if it's really needed.

Contributor Author:

My thought was that the peer pod which created the link in the first place (probably because it has higher priority) will handle any failures.

Also, a question for you: can you elaborate on the skipping logic?

  1. Can we assume that if we have higher priority, then we create the link?
  2. Is isSkipped.Response == True when both pods come up at the same time, or does one of them skip based on priority?

We could probably handle this condition differently based on the above:

if isSkipped.Response || higherPrio { // If peer POD skipped us (booted before us) or we have a higher priority

Owner:

Yeah, I guess if the peer pod fails to create the link then the error would bubble up and CNI will retry.

WRT your question, the skipping logic is a bit hard to reason about, mainly because of the CNI deletion command (when CNI fails to correctly set up the interfaces or the Pod gets deleted by kubelet).

When Pods are being created for the first time, skipped is set by a pod on its peer if the peer is not alive yet. The idea is that once the peer comes up, it will plug in all of the interfaces. So when isSkipped.Response == True, we have to do the work. Priority is a local tie-breaker for the condition when two pods are both coming up at the same time and neither of them is skipped.
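The skipped/priority rule described above can be sketched as a tiny predicate. This is illustrative only; shouldCreateLink is a hypothetical helper, not part of the meshnet-cni codebase:

```go
package main

import "fmt"

// shouldCreateLink is a hypothetical helper that captures the decision
// described above: which side of a link plugs in the veth pair.
func shouldCreateLink(skipped bool, localName, peerName string) bool {
	// If the peer booted first and marked us as skipped,
	// we must do the work now that we are up.
	if skipped {
		return true
	}
	// Otherwise both pods are racing up: the lexicographically larger
	// pod name wins the tie-break and creates the link.
	return localName > peerName
}

func main() {
	fmt.Println(shouldCreateLink(true, "r1", "r2"))  // skipped: we create the link
	fmt.Println(shouldCreateLink(false, "r2", "r1")) // higher priority: we create the link
	fmt.Println(shouldCreateLink(false, "r1", "r2")) // neither: the peer creates the link
}
```

This matches the condition `isSkipped.Response || higherPrio` in the diff above; the third case is the `else` branch that logs "Doing nothing" and continues.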

So, in your case, I'd expect that the existing interface should be detected by this logic:

// Checking if interfaces already exist
iExist, _ := koko.IsExistLinkInNS(myVeth.NsName, myVeth.LinkName)

I still don't fully understand what needs to happen for a pod to come up, reach this stage of the code, and see this error. I'm fine with this solution as a workaround, but ideally I'd prefer to handle this explicitly in the code logic rather than catching an error.

Owner:

Based on your logs, it's clear that the peer pod has come up before us and has skipped us, so priority doesn't come into play; only the skipped flag is true. That means the peer pod should never even have attempted to plug in the interfaces, which is why I don't understand why you're getting the file exists error.

time="2023-04-26T21:33:42-04:00" level=info msg="Creating Veth struct with NetNS:/proc/19196/ns/net and intfName: eth5, IP:"
time="2023-04-26T21:33:42-04:00" level=info msg="Does the link already exist? Local:false, Peer:false"
time="2023-04-26T21:33:42-04:00" level=info msg="Neither link exists. Checking if we've been skipped"
time="2023-04-26T21:33:43-04:00" level=info msg="Have we been skipped by our peer test? true"
time="2023-04-26T21:33:43-04:00" level=info msg="DO we have a higher priority? false"
time="2023-04-26T21:33:43-04:00" level=info msg="Peer POD has skipped us or we have a higher priority"
time="2023-04-26T21:33:43-04:00" level=error msg="Error when creating a new VEth pair with koko: failed to rename link koko1662009491 -> e1-21: file exists"

Contributor (@manomugdha, Apr 30, 2023):

@sar772004, I tried to reproduce this issue with a full-mesh topology of 8 pods (and 28 links) in a kind cluster with a single worker node, but I am not able to reproduce it in my setup. It does not happen during new topology creation. I tried to delete/add/replace a pod and got a different issue, which is expected, but I don't see this one. Can you please share the following information?

  • steps (along with commands) to reproduce this issue
  • topology file if possible

Contributor Author:

I will try this early next week. Thanks

Contributor:

@sar772004, since PR #80 is merged now, you can play with networkop/meshnet. There's no need to use my branch, which I will delete at any time.

Contributor Author:

I tried the latest meshnet and there is some issue with the pods and the links between them; I did not have the bandwidth to check in detail. But it's the same setup I mentioned above. Attaching the latest logs and YAML for your reference.
meshnet-cni.log

meshnet_links.yaml.txt

Contributor (@manomugdha, May 29, 2023):

Hi @sar772004, can you please share the corresponding meshnet daemon logs from the node where you collected meshnet-cni.log? They are needed to see the actual error message for why 'Skip' fails. From the logs, it seems that a few pods, e.g. test and dut-bridge, went through failures but ultimately came up.
What issues are you observing from the application's point of view, like pods stuck at Init or ImageErr, etc.?
I ran this topology in both vxlan and grpc mode, with single and multiple nodes, around 10 times in a kind cluster, and it went fine.

Contributor Author:

Hi @manomugdha, sorry, long weekend here; I don't have the setup at the moment. But the issue occurs when the links are all "veth", i.e. all the pods are on a single compute node.

All pods had come up eventually, but my use case was failing due to missing synchronization between active/standby CPU pods. Some of these backplane links run through the dut_bridge pod (which acts as a bridge), and the software running on the CPU pod expects the links to be up before it starts. I do have startup probes on these pods to confirm the backplane link status, but it's possible one end of the bridge link was not up.

I will try to set it up again and get the meshnet daemon logs.

NOTE: this worked when I was using #76 itself.

+						} else {
+							return err
+						}
 					}
 				} else { // peerPod has higherPrio and hasn't skipped us
 					// In this case we do nothing, since the pod with a higher IP is supposed to connect veth pair
-					log.Infof("Doing nothing, expecting peer pod %s to connect veth pair", peerPod.Name)
+					log.Infof("Doing nothing, expecting peer pod %s to connect veth pair with local pod %s", peerPod.Name, localPod.Name)
 					continue
 				}
 			}