- dccn-l018
- dccn-c005
- dccn-c036
- mentat004
- dccn-c029 - dccn-c034
- dccn-c350 - dccn-c354
- create Ceph pools for CephFS, e.g. `cephfs_data` for data and `cephfs_metadata` for metadata, with `pg_num` and `pgp_num` equal to 4.

  $ ceph osd pool create cephfs_data 4
  $ ceph osd pool create cephfs_metadata 4
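  As a quick sanity check (not part of the original steps), the new pools and their placement-group settings can be listed with standard Ceph commands:

  # list all pools; cephfs_data and cephfs_metadata should show up
  $ ceph osd lspools

  # confirm the placement-group settings of the data pool
  $ ceph osd pool get cephfs_data pg_num
  $ ceph osd pool get cephfs_data pgp_num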
- create a new CephFS on top of the pools

  $ ceph fs new cephfs cephfs_metadata cephfs_data
- check the CephFS

  $ ceph fs ls
  name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
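  The overall cluster state and the usage of the new pools can also be inspected at this point; these are standard Ceph commands added here only as an extra check:

  # cluster health, including MON/MDS/OSD status
  $ ceph -s

  # per-pool usage, listing cephfs_data and cephfs_metadata
  $ ceph df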
- deploy MDS from the management node

  $ ceph-deploy mds create dccn-c005
  $ ceph-deploy mds create dccn-c036
- check MDS status

  $ ceph mds stat
  e43: 1/1/1 up {0=dccn-c005=up:active}, 1 up:standby

  Just for fun, one could test the failover by manually failing the active MDS (e.g. to fail the one with index 0):

  $ ceph mds fail 0
  failed mds.0

  Now check the MDS status again; you will see the standby MDS take over:

  $ ceph mds stat
  e51: 1/1/1 up {0=dccn-c036=up:active}, 1 up:standby
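  To follow the takeover while it happens, the MDS state can simply be polled in a loop; this is only a convenience sketch, not part of the original procedure:

  # refresh the MDS map every 2 seconds during the failover
  $ watch -n 2 ceph mds stat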
- create the `ceph` management account on the client nodes

  $ /opt/cluster/sbin/ceph/02_create_manage_account.sh

  Note: follow the instructions to set up `sudo` and `sshd` for the `ceph` user.
- distribute the SSH public key from the management node to the client nodes. The following instructions use `mentat004` as an example; repeat them for all client nodes (a loop over all client nodes is sketched after this list).

  - log in to the management node with the `ceph` account
  - copy the SSH public key to the client node

    $ ssh-copy-id ceph@mentat004
  - for convenience, one could add the following configuration to `$HOME/.ssh/config`

    Host mentat004
        StrictHostKeyChecking no
        UserKnownHostsFile=/dev/null
        User ceph
    Host mentat004.dccn.nl
        StrictHostKeyChecking no
        UserKnownHostsFile=/dev/null
        User ceph
  - test passwordless login to the client node

    $ ssh mentat004

    The `ceph` user should log in to `mentat004` without a password.
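  Since the key distribution has to be repeated for every client node, a small loop can save some typing; the node list below is only an example and should be replaced with the actual client nodes:

  # example node list; adjust to the real client nodes
  $ CLIENT_NODES="mentat004"
  $ for node in ${CLIENT_NODES}; do ssh-copy-id ceph@${node}; done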
- deploy the Ceph packages to the client node

  $ ceph-deploy install mentat004
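  A quick check (not in the original notes) that the packages landed on the client is to query the installed Ceph version remotely:

  $ ssh ceph@mentat004 'ceph --version'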
- make a `cephx` user specific for mounting CephFS

  $ ceph auth get-or-create client.cephfs mds 'allow' mon 'allow *' osd 'allow * pool=cephfs_data, allow * pool=cephfs_metadata' -o ceph.client.cephfs.keyring

  The above command adds a cephx user `client.cephfs`, retrieves the secret key of the user, and stores it in `ceph.client.cephfs.keyring`.
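  The created user and its capabilities can be inspected afterwards; this is a standard command added here only as an extra check:

  $ ceph auth get client.cephfs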
- deploy the secret key of `client.cephfs` to the client node

  $ /mnt/software/cluster/sbin/ceph/05_deploy_cephfs_secret.sh `grep 'key' ceph.client.cephfs.keyring | awk '{print $NF}'` mentat004
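  The deployment script is site-specific. Judging from the secretfile path used in the mount step below, its effect is presumably equivalent to the following sketch, which writes the plain secret key to `/etc/ceph/cephfs.secret` on the client (an assumption, not a description of the actual script):

  # extract the plain secret key and place it on the client (sketch)
  $ KEY=$(grep 'key' ceph.client.cephfs.keyring | awk '{print $NF}')
  $ ssh -tt ceph@mentat004 "sudo sh -c 'echo ${KEY} > /etc/ceph/cephfs.secret; chmod 600 /etc/ceph/cephfs.secret'"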
- make a mount point (e.g. `/mnt/cephfs`) and mount the CephFS

  $ ssh -tt ceph@mentat004 'sudo mkdir /mnt/cephfs'
  $ ssh -tt ceph@mentat004 'sudo mount -t ceph dccn-c005:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfs.secret'

  where `dccn-c005` is one of the MONs (yes, it's MON, not MDS) in the cluster.
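  To make the mount persistent across reboots, an entry in `/etc/fstab` on the client can be used. The sketch below follows the kernel-client fstab format from the Ceph documentation; the MON port 6789 and the extra options `noatime`/`_netdev` are assumptions beyond the mount command above:

  # /etc/fstab on the client (sketch)
  dccn-c005:6789:/  /mnt/cephfs  ceph  name=cephfs,secretfile=/etc/ceph/cephfs.secret,noatime,_netdev  0  2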
- ACLs do not seem to be supported by the current kernel. According to reports on the internet, support will arrive in kernel 3.14 onwards; the ACLs it supports are POSIX-style.
- first performance test using `dd` to write data into CephFS, with comparison to the local disk (a scripted variant of the test is sketched after the result table).

  - Test host: `mentat004`

  - Test command

    $ dd if=/dev/zero of=/mnt/cephfs/test/zero.$$ bs=1024 count=<N>

    where `<N>` is listed in the result table below.
  - Test result

    | size   | `<N>`   | CephFS    | local disk (ext4) |
    |--------|---------|-----------|-------------------|
    | 1 KB   | 1       | 22.7 KB/s | 5.6 MB/s          |
    | 4 KB   | 4       | 3.7 MB/s  | 32.9 MB/s         |
    | 100 KB | 100     | 33 MB/s   | 140 MB/s          |
    | 1 MB   | 1024    | 178 MB/s  | 182 MB/s          |
    | 4 MB   | 4096    | 283 MB/s  | 196 MB/s          |
    | 16 MB  | 16384   | 345 MB/s  | 252 MB/s          |
    | 512 MB | 524288  | 795 MB/s  | 474 MB/s          |
    | 1 GB   | 1048576 | 837 MB/s  | 499 MB/s          |
    | 2 GB   | 2097152 | 857 MB/s  | 500 MB/s          |
    | 4 GB   | 4194304 | 869 MB/s  | 503 MB/s          |
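  The measurements above can be reproduced with a small loop instead of invoking `dd` by hand; this is only a sketch, reusing the target directory and the `<N>` values from the table:

  # bs=1024, so count equals the file size in KB
  $ for count in 1 4 100 1024 4096 16384 524288 1048576 2097152 4194304; do
        dd if=/dev/zero of=/mnt/cephfs/test/zero.${count}.$$ bs=1024 count=${count}
    done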
- scalability test with the Torque cluster (a sketch for collecting the reported throughput follows this list)

  - test command

    $ echo 'dd if=/dev/zero of=/mnt/cephfs/test/zero.$$ bs=1024 count=2097152' | qsub -N 'cephfs_2gb_n2' -t 1-2 -q test -l nodes=dccn-c032.dccn.nl,walltime=00:10:00,mem=3gb
  - results
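  Once the array jobs finish, the throughput reported by `dd` can be collected from the job output. The sketch below assumes the usual Torque convention of writing each array task's stderr to a `<jobname>.e<jobid>-<index>` file in the submission directory:

  # dd prints its summary (including MB/s) on stderr, captured in the job error files
  $ grep 'copied' cephfs_2gb_n2.e*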