Deploying a replicated NAS solution using Red Hat Storage Server 2.0

So at the moment, we just have two separate servers, each with a local disk mounted as an XFS file system. Now let’s join them up so they act as replicas of each other.

 

On your first node, the first thing we need to do is probe the other node. This is where your DNS resolution will really be verified.
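If you want a quick sanity check beforehand, you can confirm from each node that the other node’s hostname resolves, either via DNS or /etc/hosts. Something along these lines is enough:

# Run from gluster01; the hostname should resolve to gluster02's address.
# Repeat the equivalent check from gluster02.
getent hosts gluster02.example.com
ping -c 1 gluster02.example.com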

Below, you can see that I have successfully probed gluster02.example.com from gluster01.example.com:

[root@gluster01 ~]# gluster peer probe gluster02.example.com
Probe successful
[root@gluster01 ~]# gluster peer status
Number of Peers: 1

Hostname: gluster02.example.com
Uuid: 3316dbca-6e65-4550-8bb5-3fb97614785c
State: Peer in Cluster (Connected)
[root@gluster01 ~]#

Let’s verify the probe from gluster02.example.com just to be sure.

We can see that the probe has recorded gluster01.example.com by its IP address in the peer list, rather than by its hostname.

[root@gluster02 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.0.1.41
Uuid: 212c7454-bfcd-4ed0-961f-567104b9c4be
State: Peer in Cluster (Connected)
[root@gluster02 ~]#

When it comes to clusters of any form, I really don’t like inconsistency. It makes it difficult to troubleshoot later on. Let’s correct that probe so that both hosts are referenced by their FQDN.
You do this by repeating the probe in the other direction, this time from gluster02.example.com.

[root@gluster02 ~]# gluster peer probe gluster01.example.com
Probe successful
[root@gluster02 ~]# gluster peer status
Number of Peers: 1

Hostname: gluster01.example.com
Uuid: 212c7454-bfcd-4ed0-961f-567104b9c4be
State: Peer in Cluster (Connected)
[root@gluster02 ~]#

All set. Now let’s create our Gluster replica volume.

From only one of your nodes (it doesn’t matter which), tell Gluster to create your volume:

[root@gluster01 ~]# gluster volume create MYDATA replica 2 transport tcp gluster01.example.com:/mydata gluster02.example.com:/mydata
Creation of volume MYDATA has been successful. Please start the volume to access data.
[root@gluster01 ~]#

In the above command, my Gluster volume is called MYDATA. I have used capitals for a reason which I will explain later, but this is not compulsory.
‘replica 2’ means that the volume should replicate data across 2 bricks, one specified on each of the two nodes. I have also set the transport to TCP.
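Before running the create command, it is worth double checking on each node that the brick path really is the XFS mount set up earlier, rather than just a directory on the root disk. A quick check like this will do:

# Run on both nodes; the Type column should show xfs and the mount
# point should be /mydata.
df -hT /mydata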

Now let’s start the volume and verify its status:

[root@gluster01 ~]# gluster volume start MYDATA
Starting volume MYDATA has been successful
[root@gluster01 ~]# gluster volume info MYDATA

Volume Name: MYDATA
Type: Replicate
Volume ID: 3466ad6e-34ef-4bf0-92cd-09522095e233
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01.example.com:/mydata
Brick2: gluster02.example.com:/mydata
[root@gluster01 ~]#

Our replicated volume is now set up and online.

Please note: this volume will be accessible via the Gluster client, which is covered below; however, it is also automatically available via CIFS and NFS should you choose to use those methods instead.
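For reference, here is roughly what those mounts can look like from a client machine (the mount point /mnt/mydata is just an example, and the native client needs the glusterfs-fuse package installed):

# Native Gluster client mount; the client pulls the volume layout from
# the node named here and then talks to all of the bricks directly.
mount -t glusterfs gluster01.example.com:/MYDATA /mnt/mydata

# Or the same volume over the built-in NFS server (NFSv3).
mount -t nfs -o vers=3 gluster01.example.com:/MYDATA /mnt/mydata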

One thing that is very important to keep in mind, and it did leave me scratching my head when I first tried this: if you write any data locally on one of your nodes, directly into the brick directory, it will not be replicated to the other node. All disk activity should go through a Gluster client mount point. This ensures that all disk reads/writes go via the glusterd service, and it is the service which performs the replication here.

This is in the official Gluster documentation as well, so it is an important thing to remember.
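If you want to see this for yourself, write a file through a client mount (using the example /mnt/mydata mount point from above) and then check the brick directory on both nodes; the file should appear in both places:

# From a client with the volume mounted at /mnt/mydata:
echo "replication test" > /mnt/mydata/testfile

# Then on gluster01 and gluster02, inside the brick directory:
ls -l /mydata/testfile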

 
