Gluster geo-replication and aux mounts

 

Let's begin by understanding aux gfid mounts.
What is the Gfid-access Translator?


A translator is conceptually a component that can be stacked over other translators: the processed output of one translator becomes the input of the next.

In that sense, each translator "translates" the request before handing it on. Any number of translators can be added to or removed from this stack to achieve the desired behaviour.

Translators are implemented in glusterfs to convert requests from users into requests for storage.


You can learn more about translators in the GlusterFS documentation.

The gfid-access translator is one such translator, designed to provide direct access to files in the glusterfs backend through a special mount called an aux gfid mount.

An aux gfid mount is created by simply mounting a glusterfs volume with the aux-gfid-mount option:

Example:
#mount -t glusterfs -o aux-gfid-mount <ip>:<primary vol> /primary-aux-mnt

On creation of an aux-gfid-mount, a virtual directory called .gfid is exposed at the root of the mount. This directory exposes the gfids of all files in the volume.

You can stat, cat, or perform other related operations on these gfid paths.
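Under the hood, glusterfs already keeps a gfid-indexed view of every file in the brick backend: each file is hard-linked (directories are symlinked) from <brick>/.glusterfs/<first two hex chars>/<next two hex chars>/<gfid>. A minimal sketch of that path construction, using hypothetical brick and gfid values:

```shell
# Compute the backend location of a gfid inside a brick.
# BRICK and GFID are hypothetical example values.
BRICK=/d/backends/primary1
GFID=796d3170-0910-4853-9ff3-3ee6b1132080
BACKEND_PATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo "$BACKEND_PATH"
# → /d/backends/primary1/.glusterfs/79/6d/796d3170-0910-4853-9ff3-3ee6b1132080
```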


Example:

Here you are dealing with paths of the form /mountpoint/.gfid/<canonical-gfid-of-the-file>:

# cat /primary-aux-mnt/.gfid/796d3170-0910-4853-9ff3-3ee6b1132080
sample data
# stat /primary-aux-mnt/.gfid/796d3170-0910-4853-9ff3-3ee6b1132080
  File: `.gfid/796d3170-0910-4853-9ff3-3ee6b1132080'
  Size: 12          Blocks: 1          IO Block: 131072  regular file
Device: 13h/19d     Inode: 11525625031905452160  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-05-23 20:43:33.239999863 +0530
Modify: 2014-05-23 17:36:48.224999989 +0530
Change: 2014-05-23 20:44:10.081999938 +0530
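On the brick side, the gfid of a file is stored in the trusted.gfid extended attribute, which getfattr reports as raw hex. Converting that hex into the canonical dashed form used under .gfid is plain string slicing; a small sketch, using a hypothetical hex value:

```shell
# trusted.gfid as reported by: getfattr -n trusted.gfid -e hex <file-on-brick>
RAW=0x796d3170091048539ff33ee6b1132080   # hypothetical value
HEX=${RAW#0x}
CANONICAL="${HEX:0:8}-${HEX:8:4}-${HEX:12:4}-${HEX:16:4}-${HEX:20:12}"
echo "$CANONICAL"
# → 796d3170-0910-4853-9ff3-3ee6b1132080
```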



Geo-rep and aux gfid mounts:

Geo-replication mounts the primary and secondary volumes on temporary mount points of the form /tmp/gsyncd-aux-mount-<random-suffix>.

Let's verify this on an actual gluster geo-rep setup:

[root@localhost glusterfs]# gluster volume geo-replication status
 
PRIMARY NODE     PRIMARY VOL    PRIMARY BRICK           SECONDARY USER    SECONDARY                         SECONDARY NODE    STATUS     CRAWL STATUS       LAST_SYNCED                  
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.0.130    primary        /d/backends/primary1    root              ssh://192.168.0.130::secondary                      Active     Changelog Crawl    2022-02-10 13:53:14          
192.168.0.130    primary        /d/backends/primary2    root              ssh://192.168.0.130::secondary                      Passive    N/A                N/A                          
192.168.0.130    primary        /d/backends/primary3    root              ssh://192.168.0.130::secondary                      Active     Changelog Crawl    2022-02-10 13:53:14          
192.168.0.130    primary        /d/backends/primary4    root              ssh://192.168.0.130::secondary                      Passive    N/A                N/A      
                   

There will be an aux-gfid-mount associated with each gsyncd worker process.

When a worker crashes, geo-rep cleans up its mount point and exits.

On the primary:

# ps -aux | grep gsyncd-aux-mount
root      129822  0.0  0.1 799280 16236 ?        Ssl  13:52   0:00 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication/primary_192.168.0.130_secondary/mnt-d-backends-primary1.log --volfile-server=localhost --volfile-id=primary --client-pid=-1 /tmp/gsyncd-aux-mount-72m_3frc
root      129833  0.0  0.1 799388 19400 ?        Ssl  13:52   0:00 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication/primary_192.168.0.130_secondary/mnt-d-backends-primary3.log --volfile-server=localhost --volfile-id=primary --client-pid=-1 /tmp/gsyncd-aux-mount-ylnv2rm3
root      129872  0.0  0.1 799280 18732 ?        Ssl  13:52   0:03 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication/primary_192.168.0.130_secondary/mnt-d-backends-primary2.log --volfile-server=localhost --volfile-id=primary --client-pid=-1 /tmp/gsyncd-aux-mount-hgeocghl
root      129909  0.0  0.0 799280 16004 ?        Ssl  13:53   0:03 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication/primary_192.168.0.130_secondary/mnt-d-backends-primary4.log --volfile-server=localhost --volfile-id=primary --client-pid=-1 /tmp/gsyncd-aux-mount-a5fg3x6f
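Each worker's mount point can be matched back to its brick through the --log-file argument in its command line. A small sketch that pulls the log file and mount point out of one such ps line (the sample line below is hardcoded and abbreviated so the snippet runs anywhere):

```shell
# One line of the ps output above, abbreviated to the relevant fields.
LINE='root 129822 /usr/local/sbin/glusterfs --aux-gfid-mount --log-file=/var/log/glusterfs/geo-replication/primary_192.168.0.130_secondary/mnt-d-backends-primary1.log --volfile-id=primary /tmp/gsyncd-aux-mount-72m_3frc'
LOG=$(printf '%s\n' "$LINE" | grep -o -- '--log-file=[^ ]*' | cut -d= -f2-)
MNT=$(printf '%s\n' "$LINE" | awk '{print $NF}')
echo "$LOG"   # → .../mnt-d-backends-primary1.log
echo "$MNT"   # → /tmp/gsyncd-aux-mount-72m_3frc
```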


On the secondary:

# ps -aux | grep gsyncd-aux-mount
root      129591  0.0  0.1 799384 17856 ?        Ssl  13:52   0:01 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication-secondaries/primary_192.168.0.130_secondary/mnt-192.168.0.130-d-backends-primary1.log --volfile-server=localhost --volfile-id=secondary --client-pid=-1 /tmp/gsyncd-aux-mount-heduhzj6
root      129674  0.0  0.1 873116 16956 ?        Ssl  13:52   0:01 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication-secondaries/primary_192.168.0.130_secondary/mnt-192.168.0.130-d-backends-primary3.log --volfile-server=localhost --volfile-id=secondary --client-pid=-1 /tmp/gsyncd-aux-mount-_e8o318a
root      129713  0.0  0.1 799384 19064 ?        Ssl  13:52   0:01 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication-secondaries/primary_192.168.0.130_secondary/mnt-192.168.0.130-d-backends-primary2.log --volfile-server=localhost --volfile-id=secondary --client-pid=-1 /tmp/gsyncd-aux-mount-4v7p_o5b
root      129783  0.0  0.1 873116 19784 ?        Ssl  13:52   0:01 /usr/local/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO --log-file=/var/log/glusterfs/geo-replication-secondaries/primary_192.168.0.130_secondary/mnt-192.168.0.130-d-backends-primary4.log --volfile-server=localhost --volfile-id=secondary --client-pid=-1 /tmp/gsyncd-aux-mount-ezlovc3o

If something goes wrong with these aux mounts, you can refer to /var/log/glusterfs/geo-replication/<primary vol>_<secondary ip>_<secondary vol>/mnt-<brick name>.log on the primary and
/var/log/glusterfs/geo-replication-secondaries/<primary vol>_<secondary ip>_<secondary vol>/mnt-<brick name>.log on the secondary for more information.
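The mnt-*.log file name is just the brick path with its slashes turned into dashes, placed inside the per-session log directory. A sketch of building the primary-side log path from the session parameters (values taken from the hypothetical setup above):

```shell
PRIMARY_VOL=primary
SECONDARY_IP=192.168.0.130
SECONDARY_VOL=secondary
BRICK=/d/backends/primary1
# Drop the leading slash, then replace the remaining slashes with dashes.
BRICK_ID=$(printf '%s' "$BRICK" | sed 's|^/||; s|/|-|g')
echo "/var/log/glusterfs/geo-replication/${PRIMARY_VOL}_${SECONDARY_IP}_${SECONDARY_VOL}/mnt-${BRICK_ID}.log"
# → /var/log/glusterfs/geo-replication/primary_192.168.0.130_secondary/mnt-d-backends-primary1.log
```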

Changelogs in geo-rep:

The changelog translator records each gfid along with the file operation performed on it in changelog files in the brick backend, i.e. under the <brick_path>/.glusterfs/changelogs/ directory on each brick.
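The translator rolls these journals over into a new CHANGELOG.<unix-timestamp> file at every rollover interval (15 seconds by default, tunable via the changelog.rollover-time volume option). A sketch of the resulting layout, simulated in a temporary directory so it runs without a real brick:

```shell
# Stand-in for a real brick path; the timestamped file is a hypothetical example.
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.glusterfs/changelogs"
# The changelog translator would create files like:
touch "$BRICK/.glusterfs/changelogs/CHANGELOG.1644480794"
ls "$BRICK/.glusterfs/changelogs"
# → CHANGELOG.1644480794
```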

Geo-rep then consumes these changelogs and, with the help of the gfid-access translator, syncs all changes on the primary volume to the secondary volume.
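Conceptually, this is why the aux mount matters: a change recorded against a gfid can be applied on the secondary without ever resolving the file's pathname, because both sides expose the same gfid under .gfid. A rough sketch of the idea (not gsyncd's literal internals; the mount points are the hypothetical ones from the ps output above):

```shell
GFID=796d3170-0910-4853-9ff3-3ee6b1132080
PRIMARY_MNT=/tmp/gsyncd-aux-mount-72m_3frc     # hypothetical worker mount
SECONDARY_MNT=/tmp/gsyncd-aux-mount-heduhzj6   # hypothetical worker mount
# On a live setup, the data for this gfid could be copied pathlessly:
CMD="rsync -a $PRIMARY_MNT/.gfid/$GFID $SECONDARY_MNT/.gfid/$GFID"
echo "$CMD"
```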

