No secrets found in helm-toolkit during "make", and ceph-mon pod goes into "CrashLoopBackOff" #74

Open
abhirajb opened this issue Jan 31, 2019 · 3 comments

Comments

@abhirajb

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:

Kubernetes v1.13.0
Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.0-dirty", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"dirty", BuildDate:"2019-01-31T06:07:25Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.0-dirty", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"dirty", BuildDate:"2019-01-31T06:07:25Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Helm
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

What happened:
I am trying to install Ceph in a Kubernetes cluster using Helm charts, following this document: http://docs.ceph.com/docs/master/start/kube-helm/. I am facing these two major issues:

  1. Running "make" reports that no secrets were found in helm-toolkit.

  2. After the install step, i.e.

     helm install --name=ceph local/ceph --namespace=ceph -f ceph-overrides.yaml
    

the ceph-mon pod goes into the "CrashLoopBackOff" state:

    NAMESPACE          NAME                                        READY   STATUS             RESTARTS   AGE
    ceph               ceph-mds-85b4fbb478-26sw8                   0/1     Pending            0          4h56m
    ceph               ceph-mds-keyring-generator-w6xqz            0/1     Completed          0          4h56m
    ceph               ceph-mgr-588577d89f-rrd84                   0/1     Init:0/2           0          4h56m
    ceph               ceph-mgr-keyring-generator-sg75h            0/1     Completed          0          4h56m
    ceph               ceph-mon-82rtj                              2/3     CrashLoopBackOff   57         4h56m
    ceph               ceph-mon-check-549b886885-x4m7d             0/1     Init:0/2           0          4h56m
    ceph               ceph-mon-keyring-generator-d5txp            0/1     Completed          0          4h56m
    ceph               ceph-namespace-client-key-generator-rqd2m   0/1     Completed          0          4h56m
    ceph               ceph-osd-dev-sdb-9fpd9                      0/1     Init:0/3           0          4h56m
    ceph               ceph-osd-keyring-generator-m44l4            0/1     Completed          0          4h56m
    ceph               ceph-rbd-provisioner-5cf47cf8d5-gwfnj       1/1     Running            0          4h56m
    ceph               ceph-rbd-provisioner-5cf47cf8d5-s9vvg       1/1     Running            0          4h56m
    ceph               ceph-rgw-7b9677854f-9tdwt                   0/1     Pending            0          4h56m
    ceph               ceph-rgw-keyring-generator-chm89            0/1     Completed          0          4h56m
    ceph               ceph-storage-keys-generator-sqwb2           0/1     Completed          0          4h56m
    kube-system        kube-dns-8f7866879-28pq7                    3/3     Running            0          6h2m
    kube-system        tiller-deploy-dbb85cb99-68xmk               1/1     Running            0          104m
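
For reference, the crashing container's previous logs and the secrets that actually got created can be checked with something like this (a sketch, assuming the mon container inside the pod is named ceph-mon as in the upstream chart):

    kubectl -n ceph logs ceph-mon-82rtj -c ceph-mon --previous
    kubectl -n ceph get secrets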

What you expected to happen:

We want the ceph-mon pod in a running state so that we can create the secrets and keyrings; without ceph-mon running, we cannot create them. Please let me know if I am missing anything.
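
For context, once ceph-mon is Running, keys are created with ceph auth from inside the mon pod; a rough sketch, reusing the pod name from the listing above with an illustrative client name and caps:

    kubectl -n ceph exec -ti ceph-mon-82rtj -c ceph-mon -- \
        ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=rbd'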

Anything else we need to know:

@miahwk

miahwk commented Feb 22, 2019

I hit the same issue but do not know how to fix it. I have already tried cleaning "/etc/ceph" and "/var/lib/ceph-helm", but it still does not work.
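
For anyone retrying from scratch, a full teardown before reinstalling (a sketch, assuming Helm v2 as in the versions above) would be something like:

    helm delete --purge ceph
    kubectl delete namespace ceph
    # then, on each node that ran Ceph pods:
    sudo rm -rf /etc/ceph /var/lib/ceph-helm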

@jay-johnson

I think I am hitting this too. Is it related to missing keyring secrets, at least on the device pod?

kubectl describe pod -n ceph ceph-osd-dev-sdd-ck5cg

at the bottom:

Events:
  Type     Reason       Age                    From                          Message
  ----     ------       ----                   ----                          -------
  Normal   Scheduled    5m27s                  default-scheduler             Successfully assigned ceph/ceph-osd-dev-sdd-ck5cg to master3.example.com
  Warning  FailedMount  5m19s (x5 over 5m27s)  kubelet, master3.example.com  MountVolume.SetUp failed for volume "ceph-mon-keyring" : secret "ceph-mon-keyring" not found
  Warning  FailedMount  5m19s (x5 over 5m27s)  kubelet, master3.example.com  MountVolume.SetUp failed for volume "ceph-bootstrap-mds-keyring" : secret "ceph-bootstrap-mds-keyring" not found
  Warning  FailedMount  5m19s (x5 over 5m27s)  kubelet, master3.example.com  MountVolume.SetUp failed for volume "ceph-client-admin-keyring" : secret "ceph-client-admin-keyring" not found
  Warning  FailedMount  5m19s (x5 over 5m27s)  kubelet, master3.example.com  MountVolume.SetUp failed for volume "ceph-bootstrap-rgw-keyring" : secret "ceph-bootstrap-rgw-keyring" not found
  Warning  FailedMount  5m19s (x5 over 5m27s)  kubelet, master3.example.com  MountVolume.SetUp failed for volume "ceph-bootstrap-osd-keyring" : secret "ceph-bootstrap-osd-keyring" not found
  Normal   Pulled       19s (x5 over 115s)     kubelet, master3.example.com  Container image "docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04" already present on machine
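
To confirm which of those keyring secrets actually exist, something like this should show the gap (the pattern simply matches the secret names in the events above):

    kubectl -n ceph get secrets | grep keyring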

@jay-johnson

jay-johnson commented Feb 24, 2019

kubectl get pods -n ceph | grep dev
ceph-osd-dev-sdd-8kxnt                      0/1     Init:CrashLoopBackOff   5          3m43s
ceph-osd-dev-sdd-9sskz                      0/1     Init:CrashLoopBackOff   5          3m43s
ceph-osd-dev-sdd-kk2nz                      0/1     Init:CrashLoopBackOff   5          3m43s
ceph-osd-dev-sde-lvwnm                      0/1     Init:CrashLoopBackOff   5          3m43s
ceph-osd-dev-sde-mc6gl                      0/1     Init:CrashLoopBackOff   5          3m43s
ceph-osd-dev-sde-nqwvw                      0/1     Init:CrashLoopBackOff   5          3m43s

It looks like the device pods have to use a real device on the node's disk, per ceph-overrides.yaml; otherwise you get errors like:

kubectl -n ceph logs ceph-osd-dev-sdd-8kxnt -c osd-prepare-pod | tail -5
+ TIMESTAMP='2019-02-24 00:21:23'
+ echo '2019-02-24 00:21:23  /start_osd.sh: ERROR- The device pointed by OSD_DEVICE (/dev/sdd) doesn'\''t exist !'
2019-02-24 00:21:23  /start_osd.sh: ERROR- The device pointed by OSD_DEVICE (/dev/sdd) doesn't exist !
+ return 0
+ exit 1
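
For reference, the osd_devices stanza in ceph-overrides.yaml has to name block devices that really exist on the node; a rough sketch with hypothetical device paths, following the upstream kube-helm example:

    # check which block devices actually exist on this node
    lsblk -d -o NAME,SIZE,TYPE

    # ceph-overrides.yaml (excerpt) -- adjust paths to your disks
    osd_devices:
      - name: dev-sdd
        device: /dev/sdd
        zap: "1"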
