The ceph-mds standby_replay fast hot-standby state
Date: 2020-01-03  Source: OSCHINA
Ceph's MDS is the metadata service behind the CephFS file storage service.
Once a CephFS filesystem has been created, it is managed by the ceph-mds service. By default Ceph assigns only one MDS daemon to manage the filesystem, even if several MDS daemons have been deployed:

[root@ceph-admin my-cluster]# ceph-deploy mds create ceph-node01 ceph-node02
.......
[root@ceph-admin ~]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-admin=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean
At this point only ceph-admin is in the active state among the MDS daemons; the other two are standby.
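If you only want the MDS map rather than the full cluster status, ceph mds stat prints just that line. The output below is a sketch based on the cluster above; the exact format differs slightly between Ceph releases.

[root@ceph-admin ~]# ceph mds stat
mytest-fs-1/1/1 up  {0=ceph-admin=up:active}, 2 up:standby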
A standby MDS already provides disaster recovery, but in practice the failover is fairly slow, which will inevitably affect a production workload.
The test went as follows:

[root@ceph-admin my-cluster]# killall ceph-mds
[root@ceph-admin my-cluster]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_WARN
            1 filesystem is degraded

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-node02=up:rejoin}, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean

[root@ceph-admin my-cluster]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-node02=up:active}, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean

  io:
    client:   20 KiB/s rd, 3 op/s rd, 0 op/s wr
After the active MDS was killed, the standby MDS became active only after passing through the rejoin state, which took roughly 3-5 seconds. In a production environment, where there is far more metadata, the switchover usually takes considerably longer.
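A simple way to time such a test (not part of the original walkthrough) is to poll the MDS map once a second from a second terminal while the active daemon is killed, and watch the state transitions (standby, then replay/rejoin, then active) scroll by:

# run in another terminal before killing the active MDS
while true; do
    date +%T
    ceph mds stat        # prints the current MDS map on one line
    sleep 1
done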
To make the MDS fail over faster, we need to put our standby MDS daemons into the standby_replay state. The official documentation describes this state as follows:
The MDS is following the journal of another up:active MDS. Should the active MDS fail, having a standby MDS in replay mode is desirable as the MDS is replaying the live journal and will more quickly takeover. A downside to having standby replay MDSs is that they are not available to takeover for any other MDS that fails, only the MDS they follow.
In other words, a standby_replay MDS continuously replays the metadata journal of the active MDS, so it stays in sync and can take over much faster. So how do we run an MDS in standby_replay mode?

[root@ceph-node01 ~]# ps aufx | grep mds
root      700547  0.0  0.0 112704   976 pts/1    S+   13:45   0:00  \_ grep --color=auto mds
ceph      690340  0.0  0.5 451944 22988 ?        Ssl  10:09   0:03 /usr/bin/ceph-mds -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
[root@ceph-node01 ~]# killall ceph-mds
[root@ceph-node01 ~]# /usr/bin/ceph-mds -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph --hot-standby 0
starting mds.ceph-node01 at -
We manually stopped the ceph-mds service and restarted it with the extra --hot-standby option (its argument is the rank of the active MDS to follow, rank 0 here).
Next we can see that the MDS on ceph-node02 is now in the standby-replay state:

[root@ceph-admin my-cluster]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-admin=up:active}, 1 up:standby-replay, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean

  io:
    client:   851 B/s rd, 1 op/s rd, 0 op/s wr
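To see exactly which daemon is in standby-replay and which rank it follows, ceph fs status (served by the mgr; its output layout varies by release) is also useful. Standby-replay daemons are listed against the rank they shadow, for example as a rank shown like 0-s:

[root@ceph-admin ~]# ceph fs status mytest-fs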
Of course, you can instead modify the ceph-mds startup script (the systemd unit) so that the service automatically comes up in the standby-replay state.
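One way to do that (a sketch, not from the original article) is a systemd drop-in that appends --hot-standby to the stock ExecStart of the ceph-mds@ unit; this assumes the packaged ceph-mds@.service runs the same command line we saw in the ps output above:

# /etc/systemd/system/ceph-mds@ceph-node01.service.d/hot-standby.conf
# (assumes the stock ceph-mds@.service shipped with the ceph packages)
[Service]
ExecStart=
ExecStart=/usr/bin/ceph-mds -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph --hot-standby 0

# then reload systemd and restart the daemon
systemctl daemon-reload
systemctl restart ceph-mds@ceph-node01

Alternatively, on Ceph releases of this era the same behaviour can usually be enabled through configuration rather than the command line, for example mds_standby_replay = true (optionally with mds_standby_for_rank or mds_standby_for_name) in the daemon's section of ceph.conf, and on Nautilus and later with ceph fs set <fs_name> allow_standby_replay true; check the documentation for your exact version.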
