Bad Header Magic on two of three OSD nodes #66

Open

ChrisPhillips-cminion opened this issue Aug 5, 2018 · 4 comments

ChrisPhillips-cminion commented Aug 5, 2018

Is this a request for help?: YES

Is this a BUG REPORT? (choose one): BUG REPORT

Version of Helm and Kubernetes:

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
chrisp@px-chrisp1:~/APICv2018DevInstall$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
ceph-helm

What happened:
When deploying with three OSD nodes, two of them (different ones each time) fail to start with `bad header magic` in the journal. See the log below.

ceph-osd-dev-sdb-6mjwq                      0/1       Running     1          2m
ceph-osd-dev-sdb-9rgmd                      1/1       Running     0          2m
ceph-osd-dev-sdb-nrv8v                      0/1       Running     1          2m
chrisp@px-chrisp1:~/$ kubectl logs -nceph po/ceph-osd-dev-sdb-nrv8v  osd-activate-pod
+ export LC_ALL=C
+ LC_ALL=C
+ source variables_entrypoint.sh
++ ALL_SCENARIOS='osd osd_directory osd_directory_single osd_ceph_disk osd_ceph_disk_prepare osd_ceph_disk_activate osd_ceph_activate_journal mgr'
++ : ceph
++ : ceph-config/ceph
++ :
++ : osd_ceph_disk_activate
++ : 1
++ : px-chrisp3
++ : px-chrisp3
++ : /etc/ceph/monmap-ceph
++ : /var/lib/ceph/mon/ceph-px-chrisp3
++ : 0
++ : 0
++ : mds-px-chrisp3
++ : 0
++ : 100
++ : 0
++ : 0
+++ uuidgen
++ : 57ea3535-932a-410f-bf05-f6386e6f9b54
+++ uuidgen
++ : 472f01d3-5053-4ab4-9aef-9435fc48c484
++ : root=default host=px-chrisp3
++ : 0
++ : cephfs
++ : cephfs_data
++ : 8
++ : cephfs_metadata
++ : 8
++ : px-chrisp3
++ :
++ :
++ : 8080
++ : 0
++ : 9000
++ : 0.0.0.0
++ : cephnfs
++ : px-chrisp3
++ : 0.0.0.0
++ CLI_OPTS='--cluster ceph'
++ DAEMON_OPTS='--cluster ceph --setuser ceph --setgroup ceph -d'
++ MOUNT_OPTS='-t xfs -o noatime,inode64'
++ MDS_KEYRING=/var/lib/ceph/mds/ceph-mds-px-chrisp3/keyring
++ ADMIN_KEYRING=/etc/ceph/ceph.client.admin.keyring
++ MON_KEYRING=/etc/ceph/ceph.mon.keyring
++ RGW_KEYRING=/var/lib/ceph/radosgw/px-chrisp3/keyring
++ MGR_KEYRING=/var/lib/ceph/mgr/ceph-px-chrisp3/keyring
++ MDS_BOOTSTRAP_KEYRING=/var/lib/ceph/bootstrap-mds/ceph.keyring
++ RGW_BOOTSTRAP_KEYRING=/var/lib/ceph/bootstrap-rgw/ceph.keyring
++ OSD_BOOTSTRAP_KEYRING=/var/lib/ceph/bootstrap-osd/ceph.keyring
++ OSD_PATH_BASE=/var/lib/ceph/osd/ceph
+ source common_functions.sh
++ set -ex
+ is_available rpm
+ command -v rpm
+ is_available dpkg
+ command -v dpkg
+ OS_VENDOR=ubuntu
+ source /etc/default/ceph
++ TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
+ case "$CEPH_DAEMON" in
+ OSD_TYPE=activate
+ start_osd
+ [[ ! -e /etc/ceph/ceph.conf ]]
+ '[' 1 -eq 1 ']'
+ [[ ! -e /etc/ceph/ceph.client.admin.keyring ]]
+ case "$OSD_TYPE" in
+ source osd_disk_activate.sh
++ set -ex
+ osd_activate
+ [[ -z /dev/sdb ]]
+ CEPH_DISK_OPTIONS=
+ CEPH_OSD_OPTIONS=
++ blkid -o value -s PARTUUID /dev/sdb1
+ DATA_UUID=5ada2967-155e-4208-86c4-21e7edfae0f1
++ blkid -o value -s PARTUUID /dev/sdb3
++ true
+ LOCKBOX_UUID=
++ dev_part /dev/sdb 2
++ local osd_device=/dev/sdb
++ local osd_partition=2
++ [[ -L /dev/sdb ]]
++ [[ b == [0-9] ]]
++ echo /dev/sdb2
+ JOURNAL_PART=/dev/sdb2
++ readlink -f /dev/sdb
+ ACTUAL_OSD_DEVICE=/dev/sdb
+ udevadm settle --timeout=600
+ [[ -n '' ]]
++ dev_part /dev/sdb 1
++ local osd_device=/dev/sdb
++ local osd_partition=1
++ [[ -L /dev/sdb ]]
++ [[ b == [0-9] ]]
++ echo /dev/sdb1
+ wait_for_file /dev/sdb1
+ timeout 10 bash -c 'while [ ! -e /dev/sdb1 ]; do echo '\''Waiting for /dev/sdb1 to show up'\'' && sleep 1 ; done'
+ chown ceph. /dev/sdb2
+ chown ceph. /var/log/ceph
++ dev_part /dev/sdb 1
++ local osd_device=/dev/sdb
++ local osd_partition=1
++ [[ -L /dev/sdb ]]
++ [[ b == [0-9] ]]
++ echo /dev/sdb1
+ DATA_PART=/dev/sdb1
+ MOUNTED_PART=/dev/sdb1
+ [[ 0 -eq 1 ]]
+ ceph-disk -v --setuser ceph --setgroup disk activate --no-start-daemon /dev/sdb1
main_activate: path = /dev/sdb1
get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/sdb1
command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.BPKzys with options noatime,inode64
command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.BPKzys
activate: Cluster uuid is 56d8e493-f75d-43b0-af75-b5e9ed708416
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
activate: Cluster name is ceph
activate: OSD uuid is 5ada2967-155e-4208-86c4-21e7edfae0f1
activate: OSD id is 1
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup init
command: Running command: /usr/bin/ceph-detect-init --default sysvinit
activate: Marking with init system none
command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.BPKzys/none
activate: ceph osd.1 data dir is ready at /var/lib/ceph/tmp/mnt.BPKzys
move_mount: Moving mount to final location...
command_check_call: Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-1
command_check_call: Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.BPKzys
++ grep /dev/sdb1 /proc/mounts
++ awk '{print $2}'
++ grep -oh '[0-9]*'
+ OSD_ID=1
++ get_osd_path 1
++ echo /var/lib/ceph/osd/ceph-1/
+ OSD_PATH=/var/lib/ceph/osd/ceph-1/
+ OSD_KEYRING=/var/lib/ceph/osd/ceph-1//keyring
++ df -P -k /var/lib/ceph/osd/ceph-1/
++ tail -1
++ awk '{ d= $2/1073741824 ; r = sprintf("%.2f", d); print r }'
+ OSD_WEIGHT=0.09
+ ceph --cluster ceph --name=osd.1 --keyring=/var/lib/ceph/osd/ceph-1//keyring osd crush create-or-move -- 1 0.09 root=default host=px-chrisp3
create-or-move updated item name 'osd.1' weight 0.09 at location {host=px-chrisp3,root=default} to crush map
+ log SUCCESS
+ '[' -z SUCCESS ']'
++ date '+%F %T'
+ TIMESTAMP='2018-08-05 15:05:13'
+ echo '2018-08-05 15:05:13  /start_osd.sh: SUCCESS'
+ return 0
+ exec /usr/bin/ceph-osd --cluster ceph -f -i 1 --setuser ceph --setgroup disk
2018-08-05 15:05:13  /start_osd.sh: SUCCESS
starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
2018-08-05 15:05:13.822997 7fa44b9aae00 -1 journal do_read_entry(323584): bad header magic
2018-08-05 15:05:13.823010 7fa44b9aae00 -1 journal do_read_entry(323584): bad header magic
2018-08-05 15:05:13.835202 7fa44b9aae00 -1 osd.1 10 log_to_monitors {default=true} 

I have a disk on /dev/sdb with no partitions on all nodes; I even remove the partitions on install to ensure they are not there. I also remove /var/lib/ceph-helm.
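
Note that deleting the partition entries with fdisk only clears the table; the old bytes stay on disk. In case stale data is a factor, a fuller wipe would be something like this (a sketch; assumes ceph-disk, sgdisk and wipefs are available on the node, and it destroys everything on /dev/sdb):

```sh
# ceph's own destroy-the-disk helper (wipes the partition table)
sudo ceph-disk zap /dev/sdb

# or by hand:
sudo sgdisk --zap-all /dev/sdb   # remove GPT structures and the protective MBR
sudo wipefs --all /dev/sdb       # remove any leftover filesystem signatures
```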

What you expected to happen:
I would expect all three pods to start.

How to reproduce it (as minimally and precisely as possible):
I followed these instructions https://github.com/helm/helm#docs
Samples from my make script:
CleanUp

        ssh $(workerNode) sudo kubeadm reset --force 
        ssh $(workerNode) sudo rm -rf /var/lib/ceph-helm
        ssh $(workerNode) sudo rm -rf /var/kubernetes
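        # fdisk keystrokes below: delete partition 1, delete the remaining partition, write the empty table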
        ssh $(workerNode) "( echo d ; echo 1 ; echo d ; echo w ) | sudo fdisk /dev/sdb"
        
        ssh $(workerNode2) sudo kubeadm reset --force  
        ssh $(workerNode2) sudo rm -rf   /var/lib/ceph-helm 
        ssh $(workerNode2) sudo rm -rf /var/kubernetes
        ssh $(workerNode2) "( echo d ; echo 1 ; echo d ; echo w ) | sudo fdisk /dev/sdb"
        ( echo d ; echo 1 ; echo d ; echo w ) | sudo fdisk /dev/sdb
        sudo rm -rf ~/.kube
        sudo rm -rf ~/.helm
        sudo rm -rf /var/kubernetes
        sudo rm -rf  /var/lib/ceph-helm

installCeph

        kubectl create namespace ceph
        $(MAKE) -C ceph-helm/ceph/
        kubectl create -f ceph-helm/ceph/rbac.yaml
        kubectl label node px-chrisp1 ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-mon=enabled ceph-mgr=enabled
        kubectl label node px-chrisp2 ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-mon=enabled ceph-mgr=enabled
        kubectl label node px-chrisp3 ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-mon=enabled ceph-mgr=enabled
        helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml || helm upgrade ceph local/ceph -f ~/ceph-overrides.yaml --recreate-pods
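
For reference, the ~/ceph-overrides.yaml used above follows the layout from the ceph-helm docs. A minimal hypothetical version (illustrative values only, not my exact file) could be written like this:

```sh
# illustrative only -- networks and devices must match the actual cluster
cat > ~/ceph-overrides.yaml <<'EOF'
network:
  public:  192.168.0.0/16
  cluster: 192.168.0.0/16

osd_devices:
  - name: dev-sdb
    device: /dev/sdb
    zap: '1'
EOF
```

The osd_devices name dev-sdb is what produces the ceph-osd-dev-sdb daemonset and the ceph-osd-device-dev-sdb node label used above.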

Anything else we need to know:
K8s runs over three nodes, with the master on one of them. I am trying to set up an HA K8s cluster (the HA management setup will be done after Ceph).

I have also tried the latest images from Docker Hub, to no avail.

Full k8s artifacts:

NAME                                            READY     STATUS      RESTARTS   AGE
pod/ceph-mds-666578c5f5-plknd                   0/1       Pending     0          3m
pod/ceph-mds-keyring-generator-9qvq9            0/1       Completed   0          3m
pod/ceph-mgr-69c4b4d4bb-ptwv5                   1/1       Running     1          3m
pod/ceph-mgr-keyring-generator-bvqjm            0/1       Completed   0          3m
pod/ceph-mon-7lrrk                              3/3       Running     0          3m
pod/ceph-mon-check-59499b664d-c95nf             1/1       Running     0          3m
pod/ceph-mon-fk2qx                              3/3       Running     0          3m
pod/ceph-mon-h727g                              3/3       Running     0          3m
pod/ceph-mon-keyring-generator-stlsf            0/1       Completed   0          3m
pod/ceph-namespace-client-key-generator-hdqs8   0/1       Completed   0          3m
pod/ceph-osd-dev-sdb-pjw7l                      0/1       Running     1          3m
pod/ceph-osd-dev-sdb-rtgnb                      1/1       Running     0          3m
pod/ceph-osd-dev-sdb-vzbp5                      0/1       Running     1          3m
pod/ceph-osd-keyring-generator-jzj2x            0/1       Completed   0          3m
pod/ceph-rbd-provisioner-5bc57f5f64-2x5kp       1/1       Running     0          3m
pod/ceph-rbd-provisioner-5bc57f5f64-x45mz       1/1       Running     0          3m
pod/ceph-rgw-58c67497fb-sdvkp                   0/1       Pending     0          3m
pod/ceph-rgw-keyring-generator-5b4gr            0/1       Completed   0          3m
pod/ceph-storage-keys-generator-qvhzw           0/1       Completed   0          3m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/ceph-mon   ClusterIP   None          <none>        6789/TCP   3m
service/ceph-rgw   ClusterIP   10.110.4.11   <none>        8088/TCP   3m

NAME                              DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                                      AGE
daemonset.apps/ceph-mon           3         3         3         3            3           ceph-mon=enabled                                   3m
daemonset.apps/ceph-osd-dev-sdb   3         3         1         3            1           ceph-osd-device-dev-sdb=enabled,ceph-osd=enabled   3m

NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ceph-mds               1         1         1            0           3m
deployment.apps/ceph-mgr               1         1         1            1           3m
deployment.apps/ceph-mon-check         1         1         1            1           3m
deployment.apps/ceph-rbd-provisioner   2         2         2            2           3m
deployment.apps/ceph-rgw               1         1         1            0           3m

NAME                                              DESIRED   CURRENT   READY     AGE
replicaset.apps/ceph-mds-666578c5f5               1         1         0         3m
replicaset.apps/ceph-mgr-69c4b4d4bb               1         1         1         3m
replicaset.apps/ceph-mon-check-59499b664d         1         1         1         3m
replicaset.apps/ceph-rbd-provisioner-5bc57f5f64   2         2         2         3m
replicaset.apps/ceph-rgw-58c67497fb               1         1         0         3m

NAME                                            DESIRED   SUCCESSFUL   AGE
job.batch/ceph-mds-keyring-generator            1         1            3m
job.batch/ceph-mgr-keyring-generator            1         1            3m
job.batch/ceph-mon-keyring-generator            1         1            3m
job.batch/ceph-namespace-client-key-generator   1         1            3m
job.batch/ceph-osd-keyring-generator            1         1            3m
job.batch/ceph-rgw-keyring-generator            1         1            3m
job.batch/ceph-storage-keys-generator           1         1            3m

ChrisPhillips-cminion (Author) commented Aug 5, 2018

I managed to get the log off one of the osd-dev pods before it shut down:

root@px-chrisp2:/var/log/ceph# cat ceph-osd.0.log 
2018-08-05 15:58:52.844846 7f16b98c7e00  0 set uid:gid to 64045:6 (ceph:disk)
2018-08-05 15:58:52.844863 7f16b98c7e00  0 ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949) luminous (stable), process (unknown), pid 11007
2018-08-05 15:58:52.848989 7f16b98c7e00  0 pidfile_write: ignore empty --pid-file
2018-08-05 15:58:52.857166 7f16b98c7e00  0 load: jerasure load: lrc load: isa 
2018-08-05 15:58:52.857389 7f16b98c7e00  0 filestore(/var/lib/ceph/osd/ceph-0) backend xfs (magic 0x58465342)
2018-08-05 15:58:52.858433 7f16b98c7e00  0 filestore(/var/lib/ceph/osd/ceph-0) backend xfs (magic 0x58465342)
2018-08-05 15:58:52.859177 7f16b98c7e00  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2018-08-05 15:58:52.859187 7f16b98c7e00  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2018-08-05 15:58:52.859188 7f16b98c7e00  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: splice() is disabled via 'filestore splice' config option
2018-08-05 15:58:52.859538 7f16b98c7e00  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2018-08-05 15:58:52.859577 7f16b98c7e00  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_feature: extsize is disabled by conf
2018-08-05 15:58:52.861229 7f16b98c7e00  0 filestore(/var/lib/ceph/osd/ceph-0) start omap initiation
2018-08-05 15:58:52.861271 7f16b98c7e00  0  set rocksdb option compaction_readahead_size = 2097152
2018-08-05 15:58:52.861281 7f16b98c7e00  0  set rocksdb option compression = kNoCompression
2018-08-05 15:58:52.861284 7f16b98c7e00  0  set rocksdb option max_background_compactions = 8
2018-08-05 15:58:52.861308 7f16b98c7e00  0  set rocksdb option compaction_readahead_size = 2097152
2018-08-05 15:58:52.861314 7f16b98c7e00  0  set rocksdb option compression = kNoCompression
2018-08-05 15:58:52.861316 7f16b98c7e00  0  set rocksdb option max_background_compactions = 8
2018-08-05 15:58:52.862110 7f16b98c7e00  4 rocksdb: RocksDB version: 5.4.0

2018-08-05 15:58:52.862122 7f16b98c7e00  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2018-08-05 15:58:52.862123 7f16b98c7e00  4 rocksdb: Compile date Feb 19 2018
2018-08-05 15:58:52.862124 7f16b98c7e00  4 rocksdb: DB SUMMARY

2018-08-05 15:58:52.862168 7f16b98c7e00  4 rocksdb: CURRENT file:  CURRENT

2018-08-05 15:58:52.862174 7f16b98c7e00  4 rocksdb: IDENTITY file:  IDENTITY

2018-08-05 15:58:52.862181 7f16b98c7e00  4 rocksdb: MANIFEST file:  MANIFEST-000011 size: 157 Bytes

2018-08-05 15:58:52.862184 7f16b98c7e00  4 rocksdb: SST files in /var/lib/ceph/osd/ceph-0/current/omap dir, Total Num: 2, files: 000007.sst 000010.sst 

2018-08-05 15:58:52.862186 7f16b98c7e00  4 rocksdb: Write Ahead Log file in /var/lib/ceph/osd/ceph-0/current/omap: 000012.log size: 150 ; 

2018-08-05 15:58:52.862187 7f16b98c7e00  4 rocksdb:                         Options.error_if_exists: 0
2018-08-05 15:58:52.862189 7f16b98c7e00  4 rocksdb:                       Options.create_if_missing: 1
2018-08-05 15:58:52.862189 7f16b98c7e00  4 rocksdb:                         Options.paranoid_checks: 1
2018-08-05 15:58:52.862190 7f16b98c7e00  4 rocksdb:                                     Options.env: 0x562bfba5e5a0
2018-08-05 15:58:52.862191 7f16b98c7e00  4 rocksdb:                                Options.info_log: 0x562bfd726f80
2018-08-05 15:58:52.862192 7f16b98c7e00  4 rocksdb:                          Options.max_open_files: -1
2018-08-05 15:58:52.862193 7f16b98c7e00  4 rocksdb:                Options.max_file_opening_threads: 16
2018-08-05 15:58:52.862194 7f16b98c7e00  4 rocksdb:                               Options.use_fsync: 0
2018-08-05 15:58:52.862195 7f16b98c7e00  4 rocksdb:                       Options.max_log_file_size: 0
2018-08-05 15:58:52.862196 7f16b98c7e00  4 rocksdb:                  Options.max_manifest_file_size: 18446744073709551615
2018-08-05 15:58:52.862197 7f16b98c7e00  4 rocksdb:                   Options.log_file_time_to_roll: 0
2018-08-05 15:58:52.862197 7f16b98c7e00  4 rocksdb:                       Options.keep_log_file_num: 1000
2018-08-05 15:58:52.862198 7f16b98c7e00  4 rocksdb:                    Options.recycle_log_file_num: 0
2018-08-05 15:58:52.862199 7f16b98c7e00  4 rocksdb:                         Options.allow_fallocate: 1
2018-08-05 15:58:52.862200 7f16b98c7e00  4 rocksdb:                        Options.allow_mmap_reads: 0
2018-08-05 15:58:52.862200 7f16b98c7e00  4 rocksdb:                       Options.allow_mmap_writes: 0
2018-08-05 15:58:52.862201 7f16b98c7e00  4 rocksdb:                        Options.use_direct_reads: 0
2018-08-05 15:58:52.862202 7f16b98c7e00  4 rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
2018-08-05 15:58:52.862202 7f16b98c7e00  4 rocksdb:          Options.create_missing_column_families: 0
2018-08-05 15:58:52.862203 7f16b98c7e00  4 rocksdb:                              Options.db_log_dir: 
2018-08-05 15:58:52.862204 7f16b98c7e00  4 rocksdb:                                 Options.wal_dir: /var/lib/ceph/osd/ceph-0/current/omap
2018-08-05 15:58:52.862205 7f16b98c7e00  4 rocksdb:                Options.table_cache_numshardbits: 6
2018-08-05 15:58:52.862206 7f16b98c7e00  4 rocksdb:                      Options.max_subcompactions: 1
2018-08-05 15:58:52.862206 7f16b98c7e00  4 rocksdb:                  Options.max_background_flushes: 1
2018-08-05 15:58:52.862207 7f16b98c7e00  4 rocksdb:                         Options.WAL_ttl_seconds: 0
2018-08-05 15:58:52.862208 7f16b98c7e00  4 rocksdb:                       Options.WAL_size_limit_MB: 0
2018-08-05 15:58:52.862208 7f16b98c7e00  4 rocksdb:             Options.manifest_preallocation_size: 4194304
2018-08-05 15:58:52.862209 7f16b98c7e00  4 rocksdb:                     Options.is_fd_close_on_exec: 1
2018-08-05 15:58:52.862210 7f16b98c7e00  4 rocksdb:                   Options.advise_random_on_open: 1
2018-08-05 15:58:52.862210 7f16b98c7e00  4 rocksdb:                    Options.db_write_buffer_size: 0
2018-08-05 15:58:52.862211 7f16b98c7e00  4 rocksdb:         Options.access_hint_on_compaction_start: 1
2018-08-05 15:58:52.862211 7f16b98c7e00  4 rocksdb:  Options.new_table_reader_for_compaction_inputs: 1
2018-08-05 15:58:52.862212 7f16b98c7e00  4 rocksdb:               Options.compaction_readahead_size: 2097152
2018-08-05 15:58:52.862213 7f16b98c7e00  4 rocksdb:           Options.random_access_max_buffer_size: 1048576
2018-08-05 15:58:52.862213 7f16b98c7e00  4 rocksdb:           Options.writable_file_max_buffer_size: 1048576
2018-08-05 15:58:52.862214 7f16b98c7e00  4 rocksdb:                      Options.use_adaptive_mutex: 0
2018-08-05 15:58:52.862215 7f16b98c7e00  4 rocksdb:                            Options.rate_limiter: (nil)
2018-08-05 15:58:52.862216 7f16b98c7e00  4 rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
2018-08-05 15:58:52.862217 7f16b98c7e00  4 rocksdb:                          Options.bytes_per_sync: 0
2018-08-05 15:58:52.862218 7f16b98c7e00  4 rocksdb:                      Options.wal_bytes_per_sync: 0
2018-08-05 15:58:52.862218 7f16b98c7e00  4 rocksdb:                       Options.wal_recovery_mode: 2
2018-08-05 15:58:52.862219 7f16b98c7e00  4 rocksdb:                  Options.enable_thread_tracking: 0
2018-08-05 15:58:52.862220 7f16b98c7e00  4 rocksdb:         Options.allow_concurrent_memtable_write: 1
2018-08-05 15:58:52.862220 7f16b98c7e00  4 rocksdb:      Options.enable_write_thread_adaptive_yield: 1
2018-08-05 15:58:52.862221 7f16b98c7e00  4 rocksdb:             Options.write_thread_max_yield_usec: 100
2018-08-05 15:58:52.862222 7f16b98c7e00  4 rocksdb:            Options.write_thread_slow_yield_usec: 3
2018-08-05 15:58:52.862222 7f16b98c7e00  4 rocksdb:                               Options.row_cache: None
2018-08-05 15:58:52.862223 7f16b98c7e00  4 rocksdb:                              Options.wal_filter: None
2018-08-05 15:58:52.862224 7f16b98c7e00  4 rocksdb:             Options.avoid_flush_during_recovery: 0
2018-08-05 15:58:52.862224 7f16b98c7e00  4 rocksdb:             Options.base_background_compactions: 1
2018-08-05 15:58:52.862225 7f16b98c7e00  4 rocksdb:             Options.max_background_compactions: 8
2018-08-05 15:58:52.862226 7f16b98c7e00  4 rocksdb:             Options.avoid_flush_during_shutdown: 0
2018-08-05 15:58:52.862227 7f16b98c7e00  4 rocksdb:             Options.delayed_write_rate : 16777216
2018-08-05 15:58:52.862227 7f16b98c7e00  4 rocksdb:             Options.max_total_wal_size: 0
2018-08-05 15:58:52.862228 7f16b98c7e00  4 rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
2018-08-05 15:58:52.862229 7f16b98c7e00  4 rocksdb:                   Options.stats_dump_period_sec: 600
2018-08-05 15:58:52.862229 7f16b98c7e00  4 rocksdb: Compression algorithms supported:
2018-08-05 15:58:52.862230 7f16b98c7e00  4 rocksdb: 	Snappy supported: 0
2018-08-05 15:58:52.862231 7f16b98c7e00  4 rocksdb: 	Zlib supported: 0
2018-08-05 15:58:52.862231 7f16b98c7e00  4 rocksdb: 	Bzip supported: 0
2018-08-05 15:58:52.862232 7f16b98c7e00  4 rocksdb: 	LZ4 supported: 0
2018-08-05 15:58:52.862239 7f16b98c7e00  4 rocksdb: 	ZSTD supported: 0
2018-08-05 15:58:52.862240 7f16b98c7e00  4 rocksdb: Fast CRC32 supported: 0
2018-08-05 15:58:52.862529 7f16b98c7e00  4 rocksdb: [/build/ceph-12.2.3/src/rocksdb/db/version_set.cc:2609] Recovering from manifest file: MANIFEST-000011

2018-08-05 15:58:52.862626 7f16b98c7e00  4 rocksdb: [/build/ceph-12.2.3/src/rocksdb/db/column_family.cc:407] --------------- Options for column family [default]:

2018-08-05 15:58:52.862634 7f16b98c7e00  4 rocksdb:               Options.comparator: leveldb.BytewiseComparator
2018-08-05 15:58:52.862635 7f16b98c7e00  4 rocksdb:           Options.merge_operator: 
2018-08-05 15:58:52.862636 7f16b98c7e00  4 rocksdb:        Options.compaction_filter: None
2018-08-05 15:58:52.862638 7f16b98c7e00  4 rocksdb:        Options.compaction_filter_factory: None
2018-08-05 15:58:52.862639 7f16b98c7e00  4 rocksdb:         Options.memtable_factory: SkipListFactory
2018-08-05 15:58:52.862640 7f16b98c7e00  4 rocksdb:            Options.table_factory: BlockBasedTable
2018-08-05 15:58:52.862657 7f16b98c7e00  4 rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bfd244188)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 1
  pin_l0_filter_and_index_blocks_in_cache: 1
  index_type: 0
  hash_index_allow_collision: 1
  checksum: 1
  no_block_cache: 0
  block_cache: 0x562bfd53fea0
  block_cache_name: LRUCache
  block_cache_options:
    capacity : 134217728
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  filter_policy: rocksdb.BuiltinBloomFilter
  whole_key_filtering: 1
  format_version: 2

2018-08-05 15:58:52.862664 7f16b98c7e00  4 rocksdb:        Options.write_buffer_size: 67108864
2018-08-05 15:58:52.862665 7f16b98c7e00  4 rocksdb:  Options.max_write_buffer_number: 2
2018-08-05 15:58:52.862666 7f16b98c7e00  4 rocksdb:          Options.compression: NoCompression
2018-08-05 15:58:52.862667 7f16b98c7e00  4 rocksdb:                  Options.bottommost_compression: Disabled
2018-08-05 15:58:52.862667 7f16b98c7e00  4 rocksdb:       Options.prefix_extractor: nullptr
2018-08-05 15:58:52.862668 7f16b98c7e00  4 rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
2018-08-05 15:58:52.862669 7f16b98c7e00  4 rocksdb:             Options.num_levels: 7
2018-08-05 15:58:52.862670 7f16b98c7e00  4 rocksdb:        Options.min_write_buffer_number_to_merge: 1
2018-08-05 15:58:52.862671 7f16b98c7e00  4 rocksdb:     Options.max_write_buffer_number_to_maintain: 0
2018-08-05 15:58:52.862671 7f16b98c7e00  4 rocksdb:            Options.compression_opts.window_bits: -14
2018-08-05 15:58:52.862672 7f16b98c7e00  4 rocksdb:                  Options.compression_opts.level: -1
2018-08-05 15:58:52.862673 7f16b98c7e00  4 rocksdb:               Options.compression_opts.strategy: 0
2018-08-05 15:58:52.862674 7f16b98c7e00  4 rocksdb:         Options.compression_opts.max_dict_bytes: 0
2018-08-05 15:58:52.862674 7f16b98c7e00  4 rocksdb:      Options.level0_file_num_compaction_trigger: 4
2018-08-05 15:58:52.862675 7f16b98c7e00  4 rocksdb:          Options.level0_slowdown_writes_trigger: 20
2018-08-05 15:58:52.862675 7f16b98c7e00  4 rocksdb:              Options.level0_stop_writes_trigger: 36
2018-08-05 15:58:52.862676 7f16b98c7e00  4 rocksdb:                   Options.target_file_size_base: 67108864
2018-08-05 15:58:52.862676 7f16b98c7e00  4 rocksdb:             Options.target_file_size_multiplier: 1
2018-08-05 15:58:52.862677 7f16b98c7e00  4 rocksdb:                Options.max_bytes_for_level_base: 268435456
2018-08-05 15:58:52.862678 7f16b98c7e00  4 rocksdb: Options.level_compaction_dynamic_level_bytes: 0
2018-08-05 15:58:52.862678 7f16b98c7e00  4 rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
2018-08-05 15:58:52.862681 7f16b98c7e00  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2018-08-05 15:58:52.862681 7f16b98c7e00  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2018-08-05 15:58:52.862682 7f16b98c7e00  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2018-08-05 15:58:52.862683 7f16b98c7e00  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2018-08-05 15:58:52.862684 7f16b98c7e00  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2018-08-05 15:58:52.862684 7f16b98c7e00  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2018-08-05 15:58:52.862685 7f16b98c7e00  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2018-08-05 15:58:52.862685 7f16b98c7e00  4 rocksdb:       Options.max_sequential_skip_in_iterations: 8
2018-08-05 15:58:52.862686 7f16b98c7e00  4 rocksdb:                    Options.max_compaction_bytes: 1677721600
2018-08-05 15:58:52.862696 7f16b98c7e00  4 rocksdb:                        Options.arena_block_size: 8388608
2018-08-05 15:58:52.862698 7f16b98c7e00  4 rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
2018-08-05 15:58:52.862699 7f16b98c7e00  4 rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
2018-08-05 15:58:52.862700 7f16b98c7e00  4 rocksdb:       Options.rate_limit_delay_max_milliseconds: 100
2018-08-05 15:58:52.862701 7f16b98c7e00  4 rocksdb:                Options.disable_auto_compactions: 0
2018-08-05 15:58:52.862702 7f16b98c7e00  4 rocksdb:                         Options.compaction_style: kCompactionStyleLevel
2018-08-05 15:58:52.862703 7f16b98c7e00  4 rocksdb:                           Options.compaction_pri: kByCompensatedSize
2018-08-05 15:58:52.862704 7f16b98c7e00  4 rocksdb:  Options.compaction_options_universal.size_ratio: 1
2018-08-05 15:58:52.862704 7f16b98c7e00  4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2018-08-05 15:58:52.862705 7f16b98c7e00  4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2018-08-05 15:58:52.862706 7f16b98c7e00  4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2018-08-05 15:58:52.862706 7f16b98c7e00  4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2018-08-05 15:58:52.862707 7f16b98c7e00  4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2018-08-05 15:58:52.862708 7f16b98c7e00  4 rocksdb:                   Options.table_properties_collectors: 
2018-08-05 15:58:52.862708 7f16b98c7e00  4 rocksdb:                   Options.inplace_update_support: 0
2018-08-05 15:58:52.862709 7f16b98c7e00  4 rocksdb:                 Options.inplace_update_num_locks: 10000
2018-08-05 15:58:52.862710 7f16b98c7e00  4 rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
2018-08-05 15:58:52.862711 7f16b98c7e00  4 rocksdb:   Options.memtable_huge_page_size: 0
2018-08-05 15:58:52.862712 7f16b98c7e00  4 rocksdb:                           Options.bloom_locality: 0
2018-08-05 15:58:52.862712 7f16b98c7e00  4 rocksdb:                    Options.max_successive_merges: 0
2018-08-05 15:58:52.862713 7f16b98c7e00  4 rocksdb:                Options.optimize_filters_for_hits: 0
2018-08-05 15:58:52.862714 7f16b98c7e00  4 rocksdb:                Options.paranoid_file_checks: 0
2018-08-05 15:58:52.862714 7f16b98c7e00  4 rocksdb:                Options.force_consistency_checks: 0
2018-08-05 15:58:52.862715 7f16b98c7e00  4 rocksdb:                Options.report_bg_io_stats: 0
2018-08-05 15:58:52.864227 7f16b98c7e00  4 rocksdb: [/build/ceph-12.2.3/src/rocksdb/db/version_set.cc:2859] Recovered from manifest file:/var/lib/ceph/osd/ceph-0/current/omap/MANIFEST-000011 succeeded,manifest_file_number is 11, next_file_number is 13, last_sequence is 5, log_number is 0,prev_log_number is 0,max_column_family is 0

2018-08-05 15:58:52.864238 7f16b98c7e00  4 rocksdb: [/build/ceph-12.2.3/src/rocksdb/db/version_set.cc:2867] Column family [default] (ID 0), log number is 10

2018-08-05 15:58:52.864287 7f16b98c7e00  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1533484732864284, "job": 1, "event": "recovery_started", "log_files": [12]}
2018-08-05 15:58:52.864295 7f16b98c7e00  4 rocksdb: [/build/ceph-12.2.3/src/rocksdb/db/db_impl_open.cc:482] Recovering log #12 mode 2
2018-08-05 15:58:52.867540 7f16b98c7e00  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1533484732867529, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 1002, "table_properties": {"data_size": 52, "index_size": 27, "filter_size": 18, "raw_key_size": 20, "raw_average_key_size": 20, "raw_value_size": 16, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 1, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
2018-08-05 15:58:52.867567 7f16b98c7e00  4 rocksdb: [/build/ceph-12.2.3/src/rocksdb/db/version_set.cc:2395] Creating manifest 14

2018-08-05 15:58:52.868886 7f16b98c7e00  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1533484732868883, "job": 1, "event": "recovery_finished"}
2018-08-05 15:58:52.871491 7f16b98c7e00  4 rocksdb: [/build/ceph-12.2.3/src/rocksdb/db/db_impl_open.cc:1063] DB pointer 0x562bfd334000
2018-08-05 15:58:52.871621 7f16b98c7e00  0 filestore(/var/lib/ceph/osd/ceph-0) mount(1757): enabling WRITEAHEAD journal mode: checkpoint is not enabled
2018-08-05 15:58:52.872122 7f16b98c7e00  1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 31: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
2018-08-05 15:58:52.872510 7f16b98c7e00 -1 journal do_read_entry(61440): bad header magic
2018-08-05 15:58:52.872523 7f16b98c7e00 -1 journal do_read_entry(61440): bad header magic
2018-08-05 15:58:52.872588 7f16b98c7e00  1 journal _open /var/lib/ceph/osd/ceph-0/journal fd 31: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
2018-08-05 15:58:52.872884 7f16b98c7e00  1 filestore(/var/lib/ceph/osd/ceph-0) upgrade(1364)
2018-08-05 15:58:52.873590 7f16b98c7e00  0 _get_class not permitted to load lua
2018-08-05 15:58:52.874656 7f16b98c7e00  0 <cls> /build/ceph-12.2.3/src/cls/hello/cls_hello.cc:296: loading cls_hello
2018-08-05 15:58:52.874673 7f16b98c7e00  0 _get_class not permitted to load sdk
2018-08-05 15:58:52.875696 7f16b98c7e00  0 _get_class not permitted to load kvs
2018-08-05 15:58:52.876510 7f16b98c7e00  0 <cls> /build/ceph-12.2.3/src/cls/cephfs/cls_cephfs.cc:197: loading cephfs
2018-08-05 15:58:52.876811 7f16b98c7e00  0 osd.0 13 crush map has features 288514050185494528, adjusting msgr requires for clients
2018-08-05 15:58:52.876822 7f16b98c7e00  0 osd.0 13 crush map has features 288514050185494528 was 8705, adjusting msgr requires for mons
2018-08-05 15:58:52.876825 7f16b98c7e00  0 osd.0 13 crush map has features 1009089990564790272, adjusting msgr requires for osds
2018-08-05 15:58:52.876869 7f16b98c7e00  0 osd.0 13 load_pgs
2018-08-05 15:58:52.876883 7f16b98c7e00  0 osd.0 13 load_pgs opened 0 pgs
2018-08-05 15:58:52.876888 7f16b98c7e00  0 osd.0 13 using weightedpriority op queue with priority op cut off at 64.
2018-08-05 15:58:52.877571 7f16b98c7e00 -1 osd.0 13 log_to_monitors {default=true}
2018-08-05 15:58:52.882176 7f16b98c7e00  0 osd.0 13 done with init, starting boot process
2018-08-05 15:58:52.882202 7f16b98c7e00  1 osd.0 13 start_boot
2018-08-05 15:58:53.860163 7f16a9021700  1 osd.0 16 state: booting -> active
2018-08-05 15:58:55.680184 7f16b4a28700  0 -- 192.168.1.1:6801/11007 >> 192.168.0.1:6802/7742 conn(0x562bfd847000 :6801 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_msg accept connect_seq 0 vs existing csq=0 existing_state=STATE_CONNECTING
2018-08-05 15:58:55.936665 7f16b4a28700  0 -- 192.168.1.1:6801/11007 >> 192.168.2.1:6801/24346 conn(0x562bfd85d800 :6801 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_msg accept connect_seq 0 vs existing csq=0 existing_state=STATE_CONNECTING
root@px-chrisp2:/var/log/ceph# command terminated with exit code 137
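
Exit code 137 is SIGKILL (128 + 9), presumably the kubelet killing the container while I was exec'd in. For diagnosis, something like the following shows why the container was last terminated, and pulls the crashed container's log without exec'ing in (pod name illustrative):

```sh
# last termination reason (e.g. OOMKilled vs Error) plus probe events
kubectl -n ceph describe pod ceph-osd-dev-sdb-vzbp5
kubectl -n ceph get pod ceph-osd-dev-sdb-vzbp5 \
  -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'

# log of the previous (crashed) container instance
kubectl -n ceph logs --previous ceph-osd-dev-sdb-vzbp5 -c osd-activate-pod
```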

erichorwitz commented

I know this is over 8 months old, but did you get this working? I am running into a similar issue....

brian-maloney commented

Same problem here; has anyone resolved this?


flotho commented Nov 1, 2021

Same here, any ideas?
