HowTo setup GFS2 with Clustering
http://www.linuxdynasty.org/howto-setup-gfs2-with-clustering.html
In my last project at work, I had to replace NFS with GFS2 and Red Hat Clustering, so in this tutorial I will show you how to create a Red Hat or CentOS cluster with GFS2. In the next HowTo I will show you how to optimize GFS2 performance, because you will quickly notice some loss of performance until you do a little tuning. I will first show you how to build a cluster with GFS2 on the command line, and in a later tutorial I will show you how to do the same thing using Conga.
In this tutorial I am using 3 virtual machines running CentOS 5.3 on VMware ESX 3.5. For the GFS2 file system I am using a vmdk built with the thick option, shared among all the virtual machines. You can also use iSCSI or fibre channel; that choice is up to you.
gfs3 == 192.168.101.103
Since I'm using VMware ESX for the three machines above, I will also be using VMware for fencing. The details for my test setup are below.
The first command you need to know for creating and modifying your cluster is 'ccs_tool'.
Below I will show you the necessary steps to create a cluster and then the GFS2 file system.
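Steps 1 and 2 are missing from the original article; the cluster configuration itself is presumably created first with ccs_tool create. A minimal sketch, assuming the cluster name MyCluster that appears later in the mkfs step:

```shell
# Assumption: the cluster is named MyCluster, matching the name
# used later in "mkfs ... -t MyCluster:MyTestGFS".
# This writes an initial /etc/cluster/cluster.conf.
ccs_tool create MyCluster
```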
3. Now that the cluster is created, we need to add the fencing devices. (For simplicity you can just use fence_manual for each host: ccs_tool addfence -C gfs1_ipmi fence_manual.) But if you are using VMware ESX like I am, you should use fence_vmware, like so:
ccs_tool addfence -C gfs1_vmware fence_vmware ipaddr=esxtest login=esxuser passwd=esxpass vmlogin=root vmpasswd=esxpass port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/eagle1/gfs1.vmx"
ccs_tool addfence -C gfs2_vmware fence_vmware ipaddr=esxtest login=esxuser passwd=esxpass vmlogin=root vmpasswd=esxpass port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/gfs2/gfs2.vmx"
ccs_tool addfence -C gfs3_vmware fence_vmware ipaddr=esxtest login=esxuser passwd=esxpass vmlogin=root vmpasswd=esxpass port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/gfs3/gfs3.vmx"
4. Now that we have added the fencing devices, it is time to add the nodes (-n sets the node id, -v the number of votes, -f the fence device to use):
ccs_tool addnode -C gfs1.newschool.edu -n 1 -v 1 -f gfs1_vmware
ccs_tool addnode -C gfs2.newschool.edu -n 2 -v 1 -f gfs2_vmware
ccs_tool addnode -C gfs3.newschool.edu -n 3 -v 1 -f gfs3_vmware
5. Now we need to copy this configuration from gfs1 over to the other 2 nodes, or we can run the exact same commands above on each of them:
scp /etc/cluster/cluster.conf root@gfs2:/etc/cluster/cluster.conf
scp /etc/cluster/cluster.conf root@gfs3:/etc/cluster/cluster.conf
6. You can verify the config on all 3 nodes by running the following commands:
ccs_tool lsnode
ccs_tool lsfence
7. Once you have either copied over the config or re-run the same commands on the other 2 nodes, you are ready to start the following daemons on every node in the cluster:
/etc/init.d/cman start
/etc/init.d/rgmanager start
8. You can now check the status of your cluster by running the commands below...
clustat
cman_tool status
9. If you want to test the VMware fencing, you can do so as follows. (Run the command below on the first node, using the second node as the node to be fenced.)
fence_vmware -a esxtest -l esxuser -p esxpass -L root -P esxpass -n
"/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/gfs2/gfs2.vmx" -v
10. Before we start to create the LVM2 volumes and proceed to GFS2, we need to enable clustering in LVM2.
lvmconf --enable-cluster
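Steps 11 and 12 are missing from the original; based on the device path /dev/mapper/mytest_gfs2-MyGFS2test used in step 13, they presumably create a clustered LVM2 volume on the shared disk and start clvmd. A sketch, assuming the shared vmdk shows up as /dev/sdb (adjust to your actual device):

```shell
# Assumption: /dev/sdb is the shared disk; the volume group and
# logical volume names are inferred from the /dev/mapper path in step 13.
/etc/init.d/clvmd start                 # clvmd must be running on every node
pvcreate /dev/sdb                       # initialize the shared disk as a PV
vgcreate -c y mytest_gfs2 /dev/sdb      # -c y marks the volume group as clustered
lvcreate -l 100%FREE -n MyGFS2test mytest_gfs2
```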
13. Once the above has been completed, create the GFS2 file system. The general form is below; note that the first -t selects the file system type for mkfs, while the second is passed through to mkfs.gfs2 and sets the lock table name:
mkfs -t <filesystem> -p <locking mechanism> -t <ClusterName>:<FileSystemName> -j <number of journals, at least one per node in the cluster> <block device>
mkfs -t gfs2 -p lock_dlm -t MyCluster:MyTestGFS -j 4 /dev/mapper/mytest_gfs2-MyGFS2test
14. All we need to do on each of the 3 nodes is mount the GFS2 file system.
mount /dev/mapper/mytest_gfs2-MyGFS2test /mnt/
15. Once you have mounted your GFS2 file system, you can run the following commands:
gfs2_tool list
gfs2_tool df
1. Now that we have a fully functional cluster and a mountable GFS2 file system, we need to make sure all the necessary daemons start at boot:
chkconfig --level 345 rgmanager on
chkconfig --level 345 clvmd on
chkconfig --level 345 cman on
chkconfig --level 345 gfs2 on
2. If you want the GFS2 file system to be mounted at startup, you can add this to /etc/fstab:
echo "/dev/mapper/mytest_gfs2-MyGFS2test /GFS gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
In upcoming tutorials I will show you how to do the same as above with the Red Hat Conga GUI, and also how to optimize your GFS2 cluster setup.
In the last HowTo, I showed you how to set up a GFS2 file system with Red Hat Clustering. I will now show you how to optimize the performance of your GFS2 mounts. The gfs_controld daemon manages the mounting, unmounting, and recovery of GFS2 mounts; it also manages posix locks (plocks).
By default the plock_rate_limit option is set to 100, which allows a maximum of 100 plocks per second and will throttle your GFS2 performance. See below...
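The cluster.conf snippet this refers to is not shown in the original. Disabling the limit is typically done by setting plock_rate_limit to 0 in the gfs_controld element of /etc/cluster/cluster.conf (a sketch; 0 means unlimited):

```xml
<!-- In /etc/cluster/cluster.conf, as a child of <cluster>:
     plock_rate_limit="0" removes the 100-plocks-per-second cap. -->
<gfs_controld plock_rate_limit="0"/>
```

After editing cluster.conf, remember to bump its config_version attribute and propagate the change to all nodes.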
You can test the locking performance of your cluster by downloading the program ping_pong.c, which was very helpful to me in debugging the poor performance of my GFS2 cluster.
Instructions on how to compile and run it are at http://wiki.samba.org/index.php/Ping_pong. When I initially ran ping_pong, I only got a maximum of 97 plocks per second. After removing the rate limit, I was able to get about 3000 plocks per second.
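Compiling and running ping_pong is straightforward; a sketch, assuming ping_pong.c has been downloaded from the Samba wiki page above (per that page, point it at a file on the GFS2 mount and pass a lock count greater than the number of nodes):

```shell
# Assumption: ping_pong.c was downloaded from the Samba wiki page above.
gcc -o ping_pong ping_pong.c
# /GFS is the GFS2 mount point from the fstab entry earlier;
# 4 is the number of nodes (3) plus one, per the Samba wiki instructions.
./ping_pong /GFS/ping_pong.dat 4
```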