
build_and_start_camera_capturer - entered thread 'main' panicked at 'called Result::unwrap() #445

Closed
ElijahMendez opened this issue Feb 9, 2022 · 4 comments
Labels: bug (Something isn't working), stale

Comments


Describe the bug
When deploying the demo cameras, the video pods enter a crash loop.

Output of kubectl get pods,akrii,akric -o wide

NAME                                              READY   STATUS             RESTARTS        AGE    IP             NODE         NOMINATED NODE   READINESS GATES
pod/akri-controller-deployment-7796bc4f97-zh664   1/1     Running            0               11m    10.1.116.169   group8edge   <none>           <none>
pod/akri-udev-discovery-daemonset-r4g27           1/1     Running            0               11m    10.1.116.166   group8edge   <none>           <none>
pod/akri-agent-daemonset-mmcb8                    1/1     Running            0               11m    10.1.116.129   group8edge   <none>           <none>
pod/akri-video-streaming-app-688456678b-hsrns     1/1     Running            0               9m9s   10.1.116.168   group8edge   <none>           <none>
pod/akri-udev-video-090db9-pod                    0/1     CrashLoopBackOff   6 (5m4s ago)    11m    10.1.116.155   group8edge   <none>           <none>
pod/akri-udev-video-da46e8-pod                    0/1     CrashLoopBackOff   6 (4m56s ago)   11m    10.1.116.171   group8edge   <none>           <none>

NAME                                      CONFIG            SHARED   NODES            AGE
instance.akri.sh/akri-udev-video-090db9   akri-udev-video   false    ["group8edge"]   11m
instance.akri.sh/akri-udev-video-da46e8   akri-udev-video   false    ["group8edge"]   11m

NAME                                    CAPACITY   AGE
configuration.akri.sh/akri-udev-video   1          11m

Kubernetes Version: MicroK8s 1.23.3

To Reproduce
Not sure of the exact trigger; the issue recurred after removal and reinstallation.
Steps to reproduce the behavior:

  1. Follow the guide up until demo cameras are deployed

Expected behavior
Video pods should run without crash looping.

Logs (please share snips of applicable logs)

  • To get the logs of any pod, run kubectl logs <pod name>
 sudo microk8s kubectl get logs akri-udev-video-090db9-pod
error: the server doesn't have a resource type "logs"

From K8s dashboard:

Video 2:

akri.sh udev_broker ... env_logger::init finished
[2022-02-09T15:28:24Z INFO udev_video_broker] akri.sh Udev Broker logging started
[2022-02-09T15:28:24Z TRACE udev_video_broker] get_video_devnode - getting devnode
[2022-02-09T15:28:24Z TRACE udev_video_broker] get_video_devnode - found devnode /dev/video2
[2022-02-09T15:28:24Z TRACE udev_video_broker::util::camera_capturer] build_and_start_camera_capturer - entered
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 21, kind: IsADirectory, message: "Is a directory" }', samples/brokers/udev-video-broker/src/util/camera_capturer.rs:31:54
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Video 1:


akri.sh udev_broker ... env_logger::init
akri.sh udev_broker ... env_logger::init finished
[2022-02-09T15:33:53Z INFO udev_video_broker] akri.sh Udev Broker logging started
[2022-02-09T15:33:53Z TRACE udev_video_broker] get_video_devnode - getting devnode
[2022-02-09T15:33:53Z TRACE udev_video_broker] get_video_devnode - found devnode /dev/video1
[2022-02-09T15:33:53Z TRACE udev_video_broker::util::camera_capturer] build_and_start_camera_capturer - entered
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 21, kind: IsADirectory, message: "Is a directory" }', samples/brokers/udev-video-broker/src/util/camera_capturer.rs:31:54
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
  • To get the logs of a pod that has already terminated, run kubectl logs <pod name> --previous
  • If you believe that the problem is with the Kubelet, run journalctl -u kubelet or journalctl -u snap.microk8s.daemon-kubelet if you are using a MicroK8s cluster.
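For reference, pod logs are fetched with kubectl logs; there is no `kubectl get logs` subcommand, which is what produced the `the server doesn't have a resource type "logs"` error above. A sketch for this MicroK8s cluster, using the pod names from the output above:

```shell
# Current logs of a crashing broker pod (MicroK8s wraps kubectl)
sudo microk8s kubectl logs akri-udev-video-090db9-pod

# Logs of the previous, crashed container instance
sudo microk8s kubectl logs akri-udev-video-090db9-pod --previous

# Kubelet logs on a MicroK8s node
sudo journalctl -u snap.microk8s.daemon-kubelet -n 100 --no-pager
```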

Additional context

Performed the suggested clean-up actions and went through reinstallation, but the pods still crash looped. The "Is a directory" error raised in samples/brokers/udev-video-broker/src/util/camera_capturer.rs suggests the path the broker opened is a directory rather than a device node, but I didn't want to start deleting random files/directories.
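One way to narrow this down from the node (an addition, not from the thread; assumes GNU stat) is to compare the file type of the devnodes against a known character device:

```shell
# A real camera devnode must be a character device; os error 21 (IsADirectory)
# means the broker opened a directory instead.
stat -c '%F' /dev/null   # a character device prints: character special file
stat -c '%F' /tmp        # a directory prints: directory
# On the affected node, check the loopback devices themselves:
# stat -c '%F' /dev/video1 /dev/video2
```

If the /dev/video* paths report "directory", the v4l2loopback devnodes were not created correctly inside the container or on the host.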

ElijahMendez added the bug label on Feb 9, 2022
@kate-goldenring
Contributor

Hi @ElijahMendez, can you clarify which part of the demo "up until cameras are deployed" refers to? It looks like the mock cameras are not set up correctly. There might be something wrong with the compatibility of the v4l2loopback kernel module and the machines you are using. Are you running on Ubuntu? If not, that may be the issue -- I have not tested the mock cameras on other distros.

If so, can you try removing the kernel module with `sudo modprobe -r v4l2loopback` and reinserting it with `sudo modprobe v4l2loopback exclusive_caps=1 video_nr=1,2`?
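The remove/re-insert sequence above can be sketched as follows (the ls verification step is an addition, not from the comment):

```shell
# Remove the v4l2loopback module, tearing down the mock /dev/video* nodes
sudo modprobe -r v4l2loopback

# Re-insert it, creating exactly /dev/video1 and /dev/video2
sudo modprobe v4l2loopback exclusive_caps=1 video_nr=1,2

# Verify the nodes came back as character devices ('c' in the mode column)
ls -l /dev/video1 /dev/video2
```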

@kate-goldenring
Contributor

@ElijahMendez were you able to test out removing and recreating the cameras?


github-actions bot commented Jul 5, 2022

This issue has been automatically marked as stale due to 90 days of inactivity. Update the issue to remove the label; otherwise it will be automatically closed.

@jjonmueller

I see the same issue: #739. Running this within WSL2 on Windows 11.
