Shared Storage in Gluster Geo-replication
Let's check how the shared volume shows up on an actual geo-rep setup:
[root@localhost glusterfs]# gluster volume geo-replication status
PRIMARY NODE PRIMARY VOL PRIMARY BRICK SECONDARY USER SECONDARY SECONDARY NODE STATUS CRAWL STATUS LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.0.130 primary /d/backends/primary1 root ssh://192.168.0.130::secondary Active Changelog Crawl 2022-02-10 13:53:14
192.168.0.130 primary /d/backends/primary2 root ssh://192.168.0.130::secondary Passive N/A N/A
192.168.0.130 primary /d/backends/primary3 root ssh://192.168.0.130::secondary Active Changelog Crawl 2022-02-10 13:53:14
192.168.0.130 primary /d/backends/primary4 root ssh://192.168.0.130::secondary Passive N/A N/A
On the primary node, when we run gluster volume info, we can see an additional volume named gluster_shared_storage along with the primary and secondary volumes:
[root@localhost glusterfs]# gluster volume info
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: c8231c46-a5da-4084-a0e7-49b931c5d4ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.0.129:/d/backends/gluster_shared_storage1
Brick2: 192.168.0.129:/d/backends/gluster_shared_storage2
Brick3: 192.168.0.129:/d/backends/gluster_shared_storage3
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Volume Name: primary
Type: Distributed-Replicate
Volume ID: 37abe15b-7c00-4ef0-8620-fb359cac08ce
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.0.129:/d/backends/primary1
Brick2: 192.168.0.129:/d/backends/primary2
Brick3: 192.168.0.129:/d/backends/primary3
Brick4: 192.168.0.129:/d/backends/primary4
Options Reconfigured:
changelog.changelog: on
- Geo-rep is configured to use this shared storage volume when you issue the following command during the geo-rep setup phase (the volume itself is created beforehand; see the note after this list):
# gluster volume geo-replication PRIMARY_VOL SECONDARY_HOST::SECONDARY_VOL config use_meta_volume true
- We should note that, with geo-replication, each brick has a worker process associated with it.
- A volume can have replica bricks, i.e., a set of bricks holding the same data. This is designed for high availability: even if one of the replica bricks goes down, the data can still be accessed from the other replica bricks.
- If the workers of all the replica bricks synced data, the work would be redundant. To avoid this, one worker per replica set is made active and participates in syncing the data, while the other replica brick workers remain passive. A passive worker becomes active only if the current active worker dies for some reason.
- Geo-rep uses the shared volume (meta volume) to achieve this.
- Each worker from the same replica set tries to acquire a lock on a file inside the meta volume (the shared volume). The worker that acquires the lock becomes the active worker, while the rest remain passive (see the sketch further below).
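Note that the shared storage volume itself is not created by the config command above; in a typical setup it is enabled cluster-wide beforehand, which is what creates gluster_shared_storage:
# gluster volume set all cluster.enable-shared-storage enable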
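To make the lock-based election concrete, here is a minimal Python sketch of the idea. This is not the actual gsyncd implementation: the lock-file path, poll interval, and function names below are assumptions for illustration; the only real ingredient is a non-blocking fcntl lock on a file that lives on the shared volume, which every replica worker can see.

# Minimal sketch: active/passive election via a lock file on the shared (meta) volume.
# NOT the actual gsyncd code; paths and names are hypothetical.
import errno
import fcntl
import os
import time

# Hypothetical lock file on the mounted shared storage volume.
LOCK_FILE = "/var/run/gluster/shared_storage/geo-rep/example.lck"

def try_become_active(fd):
    """Try to take an exclusive, non-blocking lock; True means this
    worker is the active one for its replica set."""
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except OSError as e:
        if e.errno in (errno.EACCES, errno.EAGAIN):
            return False  # another replica worker already holds the lock
        raise

def worker_loop():
    fd = os.open(LOCK_FILE, os.O_CREAT | os.O_RDWR, 0o644)
    while True:
        if try_become_active(fd):
            print("ACTIVE: this worker syncs data to the secondary")
            # The real sync loop would run here. If this process dies,
            # the kernel releases the lock, so a passive worker can win it.
        else:
            print("PASSIVE: standing by in case the active worker goes down")
        time.sleep(5)  # re-check periodically

if __name__ == "__main__":
    worker_loop()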