
ceph-osd fails to initialize when using non-bluestore and journals #77

Open
spatel-cog opened this issue Mar 5, 2019 · 0 comments

Is this a request for help?: NO

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: Kubernetes 1.11, Helm 2.11

Which chart: ceph-helm

What happened:
Tried to use a non-bluestore configuration with separate journal devices; the ceph-osd pod fails to initialize.

What you expected to happen:
The OSD should activate and the ceph-osd daemon should start.

How to reproduce it (as minimally and precisely as possible):
A simple two-drive system. Configure values.yaml to use non-bluestore object storage with a separate journal device (see the sketch below).
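
For reference, the two-drive layout looks roughly like the following values.yaml fragment. The osd_devices block follows the chart's documented format; the objectstore note at the end is only a placeholder, since the exact key for switching off bluestore depends on the chart version.

# values.yaml (sketch) -- one OSD data disk with a separate journal disk
osd_devices:
  - name: dev-sdb
    device: /dev/sdb     # OSD data disk
    journal: /dev/sdc    # separate journal disk
    zap: "1"             # wipe the disks on first deployment

# Non-bluestore: the generated ceph.conf must end up with
#   osd objectstore = filestore
# in the [osd] section; where this is set in values.yaml varies by chart version.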

Anything else we need to know:
Attaching a potential "fix" for ceph/ceph/templates/bin/_osd_disk_activate.sh.tpl (the journal-specific wait/chown/--osd-journal handling is commented out):

#!/bin/bash
set -ex

function osd_activate {
  if [[ -z "${OSD_DEVICE}" ]]; then
    log "ERROR- You must provide a device to build your OSD ie: /dev/sdb"
    exit 1
  fi

  CEPH_DISK_OPTIONS=""
  CEPH_OSD_OPTIONS=""

  DATA_UUID=$(blkid -o value -s PARTUUID ${OSD_DEVICE}1)
  LOCKBOX_UUID=$(blkid -o value -s PARTUUID ${OSD_DEVICE}3 || true)
  JOURNAL_PART=$(dev_part ${OSD_DEVICE} 2)
  ACTUAL_OSD_DEVICE=$(readlink -f ${OSD_DEVICE}) # resolve /dev/disk/by-* names

  # watch the udev event queue, and exit if all current events are handled
  udevadm settle --timeout=600

  # wait till partition exists then activate it
  if [[ -n "${OSD_JOURNAL}" ]]; then
    # journal-specific handling disabled in this proposed fix:
    #wait_for_file /dev/disk/by-partuuid/${OSD_JOURNAL_UUID}
    #chown ceph. /dev/disk/by-partuuid/${OSD_JOURNAL_UUID}
    #CEPH_OSD_OPTIONS="${CEPH_OSD_OPTIONS} --osd-journal /dev/disk/by-partuuid/${OSD_JOURNAL_UUID}"
    CEPH_OSD_OPTIONS="${CEPH_OSD_OPTIONS}"
  else
    wait_for_file $(dev_part ${OSD_DEVICE} 1)
    chown ceph. $JOURNAL_PART
  fi

  chown ceph. /var/log/ceph

  DATA_PART=$(dev_part ${OSD_DEVICE} 1)
  MOUNTED_PART=${DATA_PART}

  if [[ ${OSD_DMCRYPT} -eq 1 ]]; then
    echo "Mounting LOCKBOX directory"
    # NOTE(leseb): adding || true so when this bug will be fixed the entrypoint will not fail
    # Ceph bug tracker: http://tracker.ceph.com/issues/18945
    mkdir -p /var/lib/ceph/osd-lockbox/${DATA_UUID}
    mount /dev/disk/by-partuuid/${LOCKBOX_UUID} /var/lib/ceph/osd-lockbox/${DATA_UUID} || true
    CEPH_DISK_OPTIONS="$CEPH_DISK_OPTIONS --dmcrypt"
    MOUNTED_PART="/dev/mapper/${DATA_UUID}"
  fi

  ceph-disk -v --setuser ceph --setgroup disk activate ${CEPH_DISK_OPTIONS} --no-start-daemon ${DATA_PART}

  OSD_ID=$(grep "${MOUNTED_PART}" /proc/mounts | awk '{print $2}' | grep -oh '[0-9]*')
  OSD_PATH=$(get_osd_path $OSD_ID)
  OSD_KEYRING="$OSD_PATH/keyring"
  OSD_WEIGHT=$(df -P -k $OSD_PATH | tail -1 | awk '{ d= $2/1073741824 ; r = sprintf("%.2f", d); print r }')
  ceph ${CLI_OPTS} --name=osd.${OSD_ID} --keyring=$OSD_KEYRING osd crush create-or-move -- ${OSD_ID} ${OSD_WEIGHT} ${CRUSH_LOCATION}

  log "SUCCESS"
  exec /usr/bin/ceph-osd ${CLI_OPTS} ${CEPH_OSD_OPTIONS} -f -i ${OSD_ID} --setuser ceph --setgroup disk
}
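
To confirm the result after redeploying, something along these lines can be used; the namespace and label selectors are assumptions about how the chart was installed:

# Sketch: assumes the chart is deployed in the "ceph" namespace and that mon
# pods carry the labels application=ceph,component=mon.
kubectl -n ceph get pods | grep osd                  # the ceph-osd pod should be Running

MON_POD=$(kubectl -n ceph get pods -l application=ceph,component=mon \
          -o jsonpath='{.items[0].metadata.name}')
kubectl -n ceph exec "${MON_POD}" -- ceph osd tree   # the new OSD should be listed and "up"
kubectl -n ceph exec "${MON_POD}" -- ceph -s         # overall cluster health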
