skip the return of err when the error in veth create is due to existing links, its possibly added by the other end pod #76

Open · sar772004 wants to merge 4 commits into master
Conversation

@sar772004 (Contributor) commented Apr 28, 2023

Skip the return of err when the error in veth create is due to existing links; it's possibly added by the pod at the other end.

Add more logs in cmdAdd to debug the veth create code (show local and peer pod info) and make it easier to traverse /var/log/meshnet-cni.log.
@networkop
@Cerebus

@kingshukdev (Contributor)

@sar772004 are you getting this during the first-time "topology bring up", or does it happen after a delete, when we try to recreate the same topology?

@sar772004 (Contributor, Author) commented Apr 28, 2023

@sar772004 are you getting this during the first-time "topology bring up", or does it happen after a delete, when we try to recreate the same topology?

It happens even after deleting and recreating the topology.

Observations:

  1. One of the pods, named "test", has a higher priority than the pod dut-c.
  2. Pod test reaches the running state faster than the dut-c pod.

log.Infof("local pod %s and peer pod %s MY VETH STRUCT: %+v", localPod.Name, peerPod.Name, spew.Sdump(myVeth))
log.Infof("local pod %s and peer pod %s PEER STRUCT: %+v", localPod.Name, peerPod.Name, spew.Sdump(peerVeth))
if strings.Contains(err.Error(), "file exists") {
log.Infof("race condition hit local pod %s and peer pod %s", localPod.Name, peerPod.Name)
Owner

Do you just assume that the link was created correctly here? Would there be a case where the link is not configured properly at this stage?
I'm thinking another option is to delete and recreate the link, but I'm not sure if it's really needed.
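If the delete-and-recreate option were preferred, a rough sketch might look like the following. Note that RemoveVethLink is assumed to exist on koko's VEth type and is not confirmed anywhere in this thread; the rest of the surrounding names follow the snippets quoted above.

if strings.Contains(err.Error(), "file exists") {
	// Assumed API: remove the stale link on our side, then retry the creation once.
	if delErr := myVeth.RemoveVethLink(); delErr != nil {
		return fmt.Errorf("failed to remove existing link %s: %v", myVeth.LinkName, delErr)
	}
	if err = koko.MakeVeth(*myVeth, *peerVeth); err != nil {
		return err
	}
}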

Contributor Author

My thought was that the peer pod which is creating the link in the first place (probably because it has a higher priority) will handle any failures.

Also a question for you: can you elaborate on the skipping logic?

  1. Can we assume that if we have a higher priority, then we create the link?
  2. Is isSkipped.Response == True when both pods come up at the same time, or does one of them skip based on priority?

We could probably handle this condition differently based on the above:

if isSkipped.Response || higherPrio { // If peer POD skipped us (booted before us) or we have a higher priority

Owner

Yeah, I guess if the peer pod fails to create the link then the error would bubble up and CNI will retry.

Wrt your question, the skipping logic is a bit hard to reason about, mainly because of the CNI deletion command (when CNI fails to correctly set up the interfaces or the Pod gets deleted by kubelet).

When Pods are being created for the first time, skipped is set by a pod on its peer if the peer is not alive yet. The idea is that once the peer comes up, it will plug in all of the interfaces. So when isSkipped.Response == True, we have to do the work. Priority is a local tie-breaker for the case when two pods are both coming up at the same time and neither of them is skipped.

So, in your case, I'd expect that the existing interface should be detected by this logic:

// Checking if interfaces already exist
iExist, _ := koko.IsExistLinkInNS(myVeth.NsName, myVeth.LinkName)

I still don't fully understand what needs to happen for a pod to come up, reach this stage of the code, and see this error. I'm fine with this solution as a workaround, but ideally I'd prefer to handle this explicitly in the code logic rather than catching an error.
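Putting together the fragments quoted in this thread (the isSkipped/higherPrio condition, the IsExistLinkInNS check, and the log messages), the decision flow being described is roughly the following paraphrase; it is not the verbatim meshnet-cni source, and err, isSkipped, and higherPrio are assumed to be in scope.

// Paraphrased flow; variable names follow the snippets quoted in this thread.
selfExists, _ := koko.IsExistLinkInNS(myVeth.NsName, myVeth.LinkName)
peerExists, _ := koko.IsExistLinkInNS(peerVeth.NsName, peerVeth.LinkName)

switch {
case selfExists && peerExists:
	// Both ends already exist (e.g. the peer pod did the work): nothing to create.
case !selfExists && !peerExists:
	// Neither end exists. If the peer skipped us (it booted before us) or we win
	// the local priority tie-breaker, we are responsible for plugging in both ends.
	if isSkipped.Response || higherPrio {
		err = koko.MakeVeth(*myVeth, *peerVeth)
	}
	// Otherwise we skip the link and let the peer plug in the interfaces when it comes up.
default:
	// Only one end exists: an asymmetric state that needs to be repaired.
}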

Owner

Based on your logs, it's clear that the peer pod has come up before us and has skipped us, so priority doesn't come into play; only the skipped flag is true. That means the peer pod should have never even attempted to plug in the interfaces, which is why I don't understand why you're getting the "file exists" error.

time="2023-04-26T21:33:42-04:00" level=info msg="Creating Veth struct with NetNS:/proc/19196/ns/net and intfName: eth5, IP:"
time="2023-04-26T21:33:42-04:00" level=info msg="Does the link already exist? Local:false, Peer:false"
time="2023-04-26T21:33:42-04:00" level=info msg="Neither link exists. Checking if we've been skipped"
time="2023-04-26T21:33:43-04:00" level=info msg="Have we been skipped by our peer test? true"
time="2023-04-26T21:33:43-04:00" level=info msg="DO we have a higher priority? false"
time="2023-04-26T21:33:43-04:00" level=info msg="Peer POD has skipped us or we have a higher priority"
time="2023-04-26T21:33:43-04:00" level=error msg="Error when creating a new VEth pair with koko: failed to rename link koko1662009491 -> e1-21: file exists"

@manomugdha (Contributor) commented Apr 30, 2023

@sar772004, I tried to reproduce this issue with a full-mesh topology of 8 pods (and 28 links) in a kind cluster with a single worker node, but I am not able to reproduce it in my setup. It does not happen during new topology creation. When I try to delete/add/replace a pod I get a different issue, which is expected, but I don't see this one. Can you please share the following information?

  • steps (along with commands) to reproduce this issue
  • the topology file, if possible

Contributor Author

I will try this early next week. Thanks

Contributor

@sar772004 since PR #80 is merged now, you can play with networkop/meshnet directly. There is no need to use my branch, which I will delete at some point.

Contributor Author

I tried the latest meshnet and there is some issue with the pods and the links between them; I did not have the bandwidth to check it in detail. But it's the same setup I mentioned above. Attaching the latest logs and YAML for your reference.
meshnet-cni.log

meshnet_links.yaml.txt

@manomugdha (Contributor) commented May 29, 2023


Hi @sar772004, can you please share the corresponding meshnet daemon logs from the node where you collected meshnet-cni.log? They are needed to see the actual error message for why 'Skip' fails. From the logs, it seems that a few pods, e.g. test and dut-bridge, went through failures, but ultimately those failed pods came up.
What are the issues you are observing from the application's point of view, e.g. a pod stuck at Init, an image error, etc.?
I ran this topology in both vxlan and grpc mode, with single and multiple nodes, around 10 times in a kind cluster, and it went fine.

Contributor Author

Hi @manomugdha, sorry, long weekend here; I don't have the setup at the moment. But the issue shows up when the links are all "veth", i.e. all the pods are on a single compute node.

All pods had come up eventually, but my use case was failing due to missing synchronization between the active/standby CPU pods. Some of these backplane links run through the dut_bridge pod (which acts as a bridge), and the software running on the CPU pods expects the links to be up before it starts. I do have startup probes on these pods to confirm the backplane link status, but it's possible one end of the bridge link was not up.

I will try to set it up again and get the meshnet daemon logs.

NOTE: this worked when I was using #76 itself.
