issue with 4.21f ceos #10
Updating `self.command` in class CEOS to the following resolved the issue for me: `['/sbin/init', 'systemd.setenv=INTFTYPE=eth', 'systemd.setenv=ETBA=1', 'systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1', 'systemd.setenv=CEOS=1', 'systemd.setenv=EOS_PLATFORM=ceoslab', 'systemd.setenv=container=docker']`
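A minimal sketch of where that list lives, assuming a CEOS node class shaped like the one in docker-topo (only the `command` value is taken from this thread; the rest of the class is illustrative):

```python
# Sketch of the fix described above, assuming a CEOS node class like the
# one in docker-topo. Class shape is hypothetical; the command list is
# quoted verbatim from the comment above.
class CEOS:
    def __init__(self):
        # /sbin/init must receive the cEOS settings as systemd.setenv
        # arguments on its command line, not only as container
        # environment variables.
        self.command = [
            '/sbin/init',
            'systemd.setenv=INTFTYPE=eth',
            'systemd.setenv=ETBA=1',
            'systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1',
            'systemd.setenv=CEOS=1',
            'systemd.setenv=EOS_PLATFORM=ceoslab',
            'systemd.setenv=container=docker',
        ]
```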
Sounds right. Do you want to do a pull request?
It seems there are still some issues with dynamic routing protocols; I am unable to bring up OSPF. I'm working with Arista TAC on case 199976 and will update here once I have more info.

```
sw-1(config-router-ospf)#end
% Internal error
% Internal error
```
I think this is because you need to have at least one Ethernet interface in the up/up state.
Nope, I do have L3 interfaces in up/up state and they can ping each other, but OSPF can't be brought up; `show logging` says Rib is continuously crashing: "BUG397410 affects all EOS versions. Our Engineering team is working on this bug fix." But there seems to be no such issue on the older version, 4.20.5F. I will update once I have more info.
(I did encounter the scenario you mentioned, where no Ethernet interface in cEOS shows up; in that case I can't even enable `ip routing`. But this time it seems different: I can at least enable `ip routing`.)
I updated `self.command`, but I am still getting the issue with version 4.22.1F: `kubectl exec -it arista01-5f4dcbdf77-99h9x Cli`
You can check that command over here: https://github.com/networkop/docker-topo/blob/master/bin/docker-topo#L416
@networkop - Still getting this issue when running it in a k8s cluster. I have no issues when I launch it as a separate Docker container. I tried different Arista cEOS images and all have the problem when launched in a k8s cluster. I can get to bash but not Cli. I ran `ps -ef` after logging in to bash and see no processes running; in the container launched standalone with Docker, I can see all the processes running.

```
bash-4.3# ps -ef
```

```
kubectl describe pod arista05-bb8dcbf6b-mkn7m
Warning  FailedScheduling  default-scheduler  running "VolumeBinding" filter plugin for pod "arista05-bb8dcbf6b-mkn7m": pod has unbound immediate PersistentVolumeClaims
```
I can't see where the error is. @vparames86, can you try launching it as a standalone pod, i.e. outside of k8s-topo?
@networkop - Even a standalone pod doesn't seem to work for me. This is the YAML I used. I put all the vars in COMMANDS and also tried putting the ones other than /sbin/init under ARGS, but it doesn't seem to work.

```
apiVersion: v1
```

Could you please share a pod.yaml that works for you?
this one worked for me
|
Ah, `sec_context = client.V1SecurityContext(privileged=True)` is missing from the `create_nsm` function. This is most probably the issue.
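For reference, the pod-spec equivalent of that fix: the cEOS container has to run privileged. A sketch as a plain dict (field names are standard Kubernetes; the pod/container names are placeholders, and the image and `systemd.setenv` args are taken from the docker command quoted in this issue):

```python
# Sketch of the relevant part of a cEOS pod spec, assuming a privileged
# container is what create_nsm was missing (per the comment above).
# Pod/container names here are placeholders, not from the thread.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ceos-example"},
    "spec": {
        "containers": [{
            "name": "ceos",
            "image": "ceosimage:4.21.0F",
            "command": ["/sbin/init"],
            "args": [
                "systemd.setenv=INTFTYPE=eth",
                "systemd.setenv=ETBA=1",
                "systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1",
                "systemd.setenv=CEOS=1",
                "systemd.setenv=EOS_PLATFORM=ceoslab",
                "systemd.setenv=container=docker",
            ],
            # The missing piece: the equivalent of
            # client.V1SecurityContext(privileged=True).
            "securityContext": {"privileged": True},
        }],
    },
}
```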
@networkop - This worked, thanks for your help!
Getting the below error when starting the pod:

```
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"Cli\": executable file not found in $PATH": unknown
```
From Arista's recent README for cEOS-lab, it seems the system needs some `systemd.setenv` args passed along with `/sbin/init`; but looking at class CEOS, it seems only environment variables are passed. (I tried to concat them into `self.command`, but it doesn't work.)
Create Docker instances with the needed environment variables:

```shell
docker create --name=ceos1 --privileged -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 -e CEOS=1 -e EOS_PLATFORM=ceoslab -e container=docker -i -t ceosimage:4.21.0F /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=docker
```
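The pattern in that command is mechanical: every `-e KEY=VALUE` environment variable is repeated as a `systemd.setenv=KEY=VALUE` argument to `/sbin/init`. A small sketch of building the container command from the env map (the helper function name is hypothetical; the key/value pairs come from the command above):

```python
# Environment variables cEOS-lab expects, as in the docker create
# command above.
CEOS_ENV = {
    "INTFTYPE": "eth",
    "ETBA": "1",
    "SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT": "1",
    "CEOS": "1",
    "EOS_PLATFORM": "ceoslab",
    "container": "docker",
}

def ceos_command(env):
    """Build the container command: /sbin/init followed by one
    systemd.setenv=KEY=VALUE argument per environment variable."""
    return ["/sbin/init"] + [f"systemd.setenv={k}={v}" for k, v in env.items()]
```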