Here are the steps to install the NKV stack and test it out.
Supported OS and Kernel: (Local NKV)
-----------------------
CentOS Linux release 7.4.1708 (Core)
3.10.0-693.el7.x86_64
Supported OS and Kernel: (Remote NKV)
-----------------------
CentOS Linux release 7.4.1708 (Core)
5.1.0
Supported system configuration:
------------------------------
Memory:
------
If cache-based listing is turned on (the default), NKV needs a system with at least 256G of memory,
depending on the number of local drives it has.
For remote NKV, the memory requirement on the NKV client side is lower, because indexing happens
on the target side.
CPU:
---
We benchmarked with the following CPU config:
Model : 2 x Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
Physical cores: 22 physical cores per CPU socket
Supported Minio release:
-----------------------
minio_nkv_jul02.1
Dependencies:
------------
yum install boost-devel
yum install jemalloc-devel
yum install libcurl-devel
Extract nkv-sdk-bin-*.tgz; it will create a folder named 'nkv-sdk', say ~/nkv-sdk.
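For example, to extract into your home directory:
tar xzf nkv-sdk-bin-*.tgz -C ~/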
Build open_mpdk driver: ( Local NKV )
----------------------
1. cd nkv-sdk/openmpdk_driver/kernel_v3.10.0-693-centos-7_4/
2. make clean
3. make all
4. ./re_insmod.sh //It may take a few seconds
5. This should unload the stock nvme driver and load the open_mpdk one.
6. Run the following command to make sure the driver loaded properly; you should see output similar to this:
[root@msl-ssg-sk01]# lsmod | grep nvme
nvme 62347 0
nvme_core 50009 0
Build and install open_mpdk driver: (Remote NKV)
---------------------
1. cd nkv-sdk/openmpdk_driver/kernel_v5.1_nvmf
2. Follow the instruction provided in the README.
Run open_mpdk test cli:
---------------------
Run the open_mpdk test cli to make sure the NVMe KV driver is working fine.
1. Run the "nvme list" command to identify the Samsung KV devices; let's say one is mounted at /dev/nvme0n1.
2. "cd ~/nkv-sdk/bin" and run the following commands, checking whether you get similar output on your setup (a flag summary follows the examples).
//PUT
[root@msl-ssg-sk01 bin]# ./sample_code_sync -d /dev/nvme0n1 -n 10 -o 1 -k 16 -v 4096
ENTER: open
EXIT : open
Total time 0.00 sec; Throughput 8883.03 ops/sec
KV device is closed: fd 3
//GET
[root@msl-ssg-sk01 bin]# ./sample_code_sync -d /dev/nvme0n1 -n 10 -o 2 -k 16 -v 4096
ENTER: open
EXIT : open
Total time 0.00 sec; Throughput 9268.52 ops/sec
KV device is closed: fd 3
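For reference, the sample_code_sync options used above (inferred from the PUT/GET examples here and the nkv_test_cli legend later in this document):
-d <device> - KV device path
-n <num_ios> - number of operations
-o <op-type> - 1 is PUT, 2 is GET
-k <key-length> - key size in bytes
-v <value-length> - value size in bytes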
Run NKV test cli:
----------------
All good so far; let's run the nkv test cli to see whether the NKV stack is working fine.
1. export LD_LIBRARY_PATH=~/nkv-sdk/lib
2. vim ../conf/nkv_config.json
3. nkv_config.json is the config file for NKV. For now it broadly has two sections: global and "nkv_local_mounts".
   The NKV API doc has a detailed explanation of all the fields. For local KV devices, the user only needs to change
   the "mount_point" field under "nkv_local_mounts" to run with the defaults.
   Provide the dev path (/dev/nvme*) from the "nvme list" command, as we did when running "sample_code_sync" above.
   The example config file has four mount points defined, and thus four dev paths, "/dev/nvme14n1" .. "/dev/nvme17n1".
   To add or remove devices, add or remove entries under 'nkv_local_mounts', as sketched below.
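   As a rough sketch, each entry under "nkv_local_mounts" carries at least the "mount_point" field (copy the exact
   entry structure from the shipped example config, since other fields may be required):
   "nkv_local_mounts": [
     { "mount_point": "/dev/nvme14n1" },
     { "mount_point": "/dev/nvme15n1" },
     { "mount_point": "/dev/nvme16n1" },
     { "mount_point": "/dev/nvme17n1" }
   ]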
4. Create the log folder "/var/log/dragonfly/" if it doesn't exist; this is the default log location. The default log level is WARN.
   Log config options can be changed in bin/smglogger.properties.
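   To create the folder:
   mkdir -p /var/log/dragonfly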
5. Run the "./nkv_test_cli" command to see the usage information.
6. Local NKV execution:
./nkv_test_cli -c ../conf/nkv_config.json -i msl-ssg-dl04 -p 1030 -b meta/prefix1/nkv -r / -k 128 -v 4096 -o 3 -n 100
Remote NKV execution:
./nkv_test_cli -c ../conf/nkv_config_remote.json -i msl-ssg-dl04 -p 1030 -b meta/prefix1/nkv -r / -k 128 -v 4096 -o 3 -n 100
7. This command should generate output on the console as well as in /var/log/dragonfly/nkv.log.
8. On a successful run, it should generate output similar to the following (more verbose output is available if INFO logging is enabled; see step 9):
2019-04-23 17:10:11,103 [140136478451584] [ALERT] contact_fm = 0, nkv_transport = 0, min_container_required = 1, min_container_path_required = 1, container_path_qd = 16384
2019-04-23 17:10:11,103 [140136478451584] [ALERT] core_pinning_required = 0, app_thread_core = -1, nkv_queue_depth_monitor_required = 0, nkv_queue_depth_threshold_per_path = 0
2019-04-23 17:10:11,103 [140136478451584] [ALERT] Adding device path, mount point = /dev/nvme14n1, address = 101.100.10.31, port = 1023, nqn name = nqn-02, target node = msl-ssg-sk01, numa = 0, core = -1
2019-04-23 17:10:11,103 [140136478451584] [ALERT] Adding device path, mount point = /dev/nvme15n1, address = 102.100.10.31, port = 1023, nqn name = nqn-02, target node = msl-ssg-sk01, numa = 1, core = -1
2019-04-23 17:10:11,103 [140136478451584] [ALERT] Adding device path, mount point = /dev/nvme16n1, address = 103.100.10.31, port = 1023, nqn name = nqn-02, target node = msl-ssg-sk01, numa = 1, core = -1
2019-04-23 17:10:11,103 [140136478451584] [ALERT] Adding device path, mount point = /dev/nvme17n1, address = 104.100.10.31, port = 1023, nqn name = nqn-02, target node = msl-ssg-sk01, numa = 1, core = -1
2019-04-23 17:10:11,103 [140136478451584] [ALERT] Max QD per path = 4096
ENTER: open
EXIT : open
ENTER: open
EXIT : open
ENTER: open
EXIT : open
ENTER: open
EXIT : open
2019-04-23 17:10:11,144 [140136478451584] [ALERT] TPS = 6910, Throughput = 26 MB/sec, value_size = 4096, total_num_objs = 100
KV device is closed: fd 3
KV device is closed: fd 6
KV device is closed: fd 8
KV device is closed: fd 10
9. For more verbose output, enable INFO logging in bin/smglogger.properties by editing the following line:
   log4cpp.category.libnkv=WARN, nkvAppender_rolling
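   Change WARN to INFO so that the line reads:
   log4cpp.category.libnkv=INFO, nkvAppender_rolling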
10. To get the list of keys, run the following command:
./nkv_test_cli -c ../conf/nkv_config.json -i msl-ssg-dl04 -p 1030 -b meta/minio -r / -k 128 -o 4 -n 10000
-b <prefix> - will filter on the key prefix
-k <key-length> - Key size
-o <op-type> - 4 is for listing
-n <num_ios> - max number of keys returned in one shot
Building app on top of NKV:
--------------------------
1. Header files required to build an app are present in the ~/nkv-sdk/include folder.
2. The NKV library and other dependent libraries are present in ~/nkv-sdk/lib.
3. The nkv_test_cli code is provided as a reference under the ~/nkv-sdk/src/test folder.
4. Supported APIs so far: nkv_open, nkv_close, nkv_physical_container_list, nkv_malloc, nkv_zalloc, nkv_free, nkv_store_kvp, nkv_retrieve_kvp, nkv_delete_kvp, nkv_indexing_list_keys.
   A compile sketch follows.
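A minimal compile sketch (the source file name and the -lnkvapi library name are assumptions; check ~/nkv-sdk/lib for the actual shared object names and add any dependent libraries the linker asks for):
g++ -std=c++11 -I ~/nkv-sdk/include my_nkv_app.cpp -L ~/nkv-sdk/lib -lnkvapi -o my_nkv_app
export LD_LIBRARY_PATH=~/nkv-sdk/lib //so the loader finds the NKV libraries at run time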
Running MINIO app:
------------------
You may want to put the following in a script. Note that Minio needs at least 4 devices to run.
1. export LD_LIBRARY_PATH=~/nkv-sdk/lib
2. export MINIO_NKV_CONFIG=~/nkv-sdk/conf/nkv_config.json
3. export MINIO_ACCESS_KEY=minio
4. export MINIO_SECRET_KEY=minio123
5. export MINIO_NKV_MAX_VALUE_SIZE=2097152
6. export MINIO_NKV_SYNC=1
7. ulimit -n 65535
8. ulimit -c unlimited
9. cd ~/nkv-sdk/bin
10. ./<minio-binary> server /dev/nvme{0...3}n1
11. A start.sh script that incorporates all of the above commands is included under the bin directory.
    Change the minio binary, config file and LD_LIBRARY_PATH in it accordingly.
12. Run Minio with local KV drives. The mount points on the host should match the mount points
    given in nkv_config.json under 'nkv_local_mounts'.
./start.sh
13. Run Minio with remote KV drives. Make sure remote mount paths are connected
./start.sh remote
14. Run distributed Minio with pre-existing remote mount points (see the /etc/hosts sketch below).
    ./<minio-binary> server http://minio{1...4}/dev/nvme{0...3}n1
    where minio1 to minio4 are the 4 Minio node names listed in the /etc/hosts file of each server,
    and /dev/nvme{0...3}n1 are the 4 remote KV drives.
./start.sh remote dist
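    For example, each server's /etc/hosts might contain entries like these (the addresses are placeholders; use your nodes' actual IPs):
    10.1.1.11 minio1
    10.1.1.12 minio2
    10.1.1.13 minio3
    10.1.1.14 minio4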
15. Run Minio with the complete DSS stack, which includes the target software, the host library and the UnifiedFabricManager.
    - Make sure UFM is up, and update the NKV remote configuration file (conf/nkv_remote_config.json).
16. More detailed documentation on how to run Minio with the KV stack can be found on the Minio web site.
Running MINIO app with emulator:
-------------------------------
1. Modify the config file path within bin/start.sh to use nkv_config_emul.json.
2. Download the minio_nkv_jun27.1 (or latest Minio) binary under bin/.
3. Uncomment: LD_PRELOAD=/lib64/libjemalloc.so.1 ./minio_nkv_jul24 server /dev/kvemul{1...4}
4. Comment out: LD_PRELOAD=/lib64/libjemalloc.so.1 ./minio_nkv_jul24 server /dev/nvme{0...5}n1
5. ./start.sh
6. Run s3-benchmark against this server, like this (a flag summary follows):
./s3-benchmark -a minio -b default -s minio123 -u http://<ip>:9000 -t 2 -z 64M -d 10
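The s3-benchmark flags used above (consult s3-benchmark's own usage output for the authoritative list):
-a <access-key> -b <bucket> -s <secret-key> -u <endpoint-url>
-t <threads> -z <object-size> -d <duration-in-seconds>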
7. The emulator doesn't support many threads or bigger I/O sizes because of memory limitations.