Flash cache-tier flush (demotion) experiment
Create three VMs,
named ceph01, ceph02, and ceph03.
a) Manually deploy Ceph on the hosts.
b) After deployment completes, create an HDD pool:
rados mkpool HDD
(Note: rados mkpool is deprecated in later Ceph releases; ceph osd pool create is the modern equivalent.)
Then customize the CRUSH map:
1. ceph osd getcrushmap -o crushmap   # dump the current (binary) CRUSH map
2. crushtool -d crushmap -o crushmap.txt   # decompile the binary map into text for editing
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
# buckets
host ceph01 {
	id -2		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 0.000
	item osd.1 weight 0.000
}
host ceph02 {
	id -3		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0	# rjenkins1
	item osd.2 weight 0.000
	item osd.3 weight 0.000
}
host ceph03 {
	id -4		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0	# rjenkins1
	item osd.4 weight 0.000
	item osd.5 weight 0.000
}
root default {
	id -1		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0	# rjenkins1
	item ceph01 weight 0.000
	item ceph02 weight 0.000
	item ceph03 weight 0.000
}
root flash {
	id -5		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 0.000
	item osd.2 weight 0.000
	item osd.4 weight 0.000
}
root hdd {
	id -6		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0	# rjenkins1
	item osd.1 weight 0.000
	item osd.3 weight 0.000
	item osd.5 weight 0.000
}
# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}
rule hdd_ruleset {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take hdd
	step chooseleaf firstn 0 type osd
	step emit
}
rule flash_ruleset {
	ruleset 2
	type replicated
	min_size 1
	max_size 10
	step take flash
	step chooseleaf firstn 0 type osd
	step emit
}
# end crush map
3. crushtool -c crushmap.txt -o ncrushmap   # compile the edited text map back into binary
4. ceph osd setcrushmap -i ncrushmap   # inject the new CRUSH map into the cluster
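Steps 1–4 can be collected into one script. Since these commands need a live cluster, the sketch below defaults to a dry run that only prints each command; the run wrapper and LIVE switch are assumptions for illustration, not Ceph tooling:

```shell
# Dry-run wrapper: with no live cluster available, just print each
# command.  Set LIVE=1 on a real cluster (assumes ceph and crushtool
# are on PATH there) to actually execute them.
run() {
    if [ "${LIVE:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

run ceph osd getcrushmap -o crushmap         # step 1: dump binary map
run crushtool -d crushmap -o crushmap.txt    # step 2: decompile to text
# ...edit crushmap.txt here: add the flash/hdd roots and rules above...
run crushtool -c crushmap.txt -o ncrushmap   # step 3: recompile to binary
run ceph osd setcrushmap -i ncrushmap        # step 4: inject into cluster
```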
(ceph osd pool set HDD crush_ruleset 1
ceph osd pool set flash crush_ruleset 2)   # bind the new rules to the pools; the flash pool must exist first (create it the same way as the HDD pool)
5. Before adding the cache tier, sanity-check that the pool is usable:
echo 'test' | rados -p HDD put test_object -   # write a test object
rados -p HDD get test_object -   # read the object's contents back
rados df   # check pool usage
rados -p HDD rm test_object   # delete the test object
6. ceph osd tier add HDD flash   # attach the flash pool as a cache tier of the HDD pool
ceph osd tier cache-mode flash writeback   # set the flash pool's cache mode to writeback
ceph osd tier set-overlay HDD flash   # route client I/O for the HDD pool through the flash tier
Read a cache parameter: ceph osd pool get {cachepool} {key}
Set a cache parameter: ceph osd pool set {cachepool} {key} {value}
ceph osd pool get flash hit_set_type
ceph osd pool set flash hit_set_type bloom
ceph osd pool set flash hit_set_count 0
ceph osd pool set flash hit_set_period 3600
ceph osd pool set flash target_max_bytes 10000000000
ceph osd pool set flash cache_target_dirty_ratio 0.4
ceph osd pool set flash cache_target_full_ratio 0.8
ceph osd pool set flash target_max_objects 1000000
ceph osd pool set flash cache_min_flush_age 600
ceph osd pool set flash cache_min_evict_age 1200
(Note: hit_set_count 0 disables hit-set tracking; a working writeback tier normally keeps at least one hit set.)
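The dirty and full ratios are fractions of target_max_bytes, so the values above fix the points at which the tiering agent starts flushing and evicting. A quick sketch of that arithmetic (awk is used only for the floating-point multiply):

```shell
# Trigger points implied by the parameters set above.
target_max_bytes=10000000000   # 10 GB cap on the cache pool
dirty_ratio=0.4                # cache_target_dirty_ratio
full_ratio=0.8                 # cache_target_full_ratio

# POSIX shell has no float math, so delegate to awk.
flush_at=$(awk -v t="$target_max_bytes" -v r="$dirty_ratio" 'BEGIN { printf "%.0f", t * r }')
evict_at=$(awk -v t="$target_max_bytes" -v r="$full_ratio" 'BEGIN { printf "%.0f", t * r }')

echo "flushing starts at $flush_at dirty bytes"   # 4000000000 (~4 GB)
echo "eviction starts at $evict_at used bytes"    # 8000000000 (~8 GB)
```

cache_min_flush_age and cache_min_evict_age then delay those actions for objects younger than 600 s and 1200 s respectively.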
7. On a client machine, mount a filesystem backed by the cluster.
Under the mount point, run a shell script that writes 100 files in a loop.
At this point, rados df
shows that many objects have been written into the flash pool.
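The writer loop from step 7 can be sketched as follows; the 4 KB file size and the temp-directory fallback are assumptions (the real run wrote under the /wyz mount point, passed here via DIR):

```shell
# Write 100 small files into $DIR (default: a fresh temp directory),
# enough traffic to populate the writeback cache tier.
dir="${DIR:-$(mktemp -d)}"
i=1
while [ "$i" -le 100 ]; do
    # 4 KB of zeroes per file; the size is arbitrary for this experiment
    dd if=/dev/zero of="$dir/file_$i" bs=1024 count=4 2>/dev/null
    i=$((i + 1))
done
echo "wrote $(ls "$dir" | wc -l) files to $dir"
```

Run it as DIR=/wyz sh write_files.sh on the client, then check rados df.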
8. Flush the flash pool's data down to the HDD pool (umount the /wyz mount point before doing this):
rados -p flash cache-flush-evict-all
After the flush completes, mount it again: all the data is intact.