
Saturday, 10 September 2016

How-to guide on creating a Replicated GlusterFS Volume on CentOS 7 using DigitalOcean Block Storage


This second part of the exercise creates a Replicated GlusterFS Volume to overcome the data-loss risk of the distributed volume set up in part one. For a detailed architectural overview, visit the GlusterFS documentation page.

Click here to view first part

Replicated Glusterfs Volume

Steps:

  1. Create trusted storage pool
  2. Create mounts using block storage created
  3. Create Replicated Volume
  4. Mount volume on a client
  5. Verification
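
For quick reference, here is a condensed sketch of the commands used in the sections below. Hostnames, device paths, volume names and sizes are the ones from this walkthrough; adjust them to your own environment, and note that your block-storage device path will differ.

sudo gluster peer probe repl1        # run on an existing pool member, e.g. gluster1
sudo gluster peer probe repl2

# on repl1 (repeat on repl2, substituting brickpool4 and shadow_brick2):
sudo pvcreate /dev/sda
sudo vgcreate vg_bricks /dev/sda
sudo lvcreate -L 14G -T vg_bricks/brickpool3
sudo lvcreate -V 3G -T vg_bricks/brickpool3 -n shadow_brick1
sudo mkfs.xfs -i size=512 /dev/vg_bricks/shadow_brick1
sudo mkdir -p /bricks/shadow_brick1
sudo mount /dev/vg_bricks/shadow_brick1 /bricks/shadow_brick1/
sudo mkdir /bricks/shadow_brick1/brick

# back on repl1: create and start the two-way replicated volume
sudo gluster volume create shadowvol replica 2 repl1:/bricks/shadow_brick1/brick repl2:/bricks/shadow_brick2/brick
sudo gluster volume start shadowvol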

To make it easy for you to follow I have:
  • highlighted commands in BLUE
  • highlighted expected output in GREEN
  • made server names bold

Prerequisite:


  1. Basic familiarity with Linux commands and remote access to servers
  2. This demo can be followed using local VMs using VirtualBox or VM Fusion etc
  3. Understanding of LVM

1. Replicated Volume Setup - create a replicated volume across two storage servers. Once again we'll be assigning block storage to these servers. Add repl1 and repl2 to the trusted storage pool.


[eedevs@gluster1 ~]$ sudo gluster peer probe repl1
peer probe: success. 

[eedevs@gluster1 ~]$ sudo gluster peer probe repl2
peer probe: success

2. Create mounts

2.1 on repl1


[eedevs@gluster1-replica ~]$ sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-02
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1048576 inodes, 4194304 blocks
209715 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
128 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[eedevs@gluster1-replica ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda       8:0    0  16G  0 disk 
vda     253:0    0  20G  0 disk 
├─vda1  253:1    0  20G  0 part /
└─vda15 253:15   0   1M  0 part 

[eedevs@gluster1-replica ~]$ sudo pvcreate /dev/sda
WARNING: ext4 signature detected on /dev/sda at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/sda.
  Physical volume "/dev/sda" successfully created

[eedevs@gluster1-replica ~]$ sudo vgcreate vg_bricks /dev/sda
  Volume group "vg_bricks" successfully created

[eedevs@gluster1-replica ~]$ sudo lvcreate -L 14G -T vg_bricks/brickpool3
  Logical volume "brickpool3" created.

[eedevs@gluster1-replica ~]$ sudo lvcreate -V 3G -T vg_bricks/brickpool3 -n shadow_brick1
  Logical volume "shadow_brick1" created.

[eedevs@gluster1-replica ~]$ sudo mkfs.xfs -i size=512 /dev/vg_bricks/shadow_brick1
meta-data=/dev/vg_bricks/shadow_brick1 isize=512    agcount=8, agsize=98288 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=786304, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[eedevs@gluster1-replica ~]$ sudo mkdir -p /bricks/shadow_brick1

[eedevs@gluster1-replica ~]$ sudo mount /dev/vg_bricks/shadow_brick1 /bricks/shadow_brick1/

[eedevs@gluster1-replica ~]$ sudo mkdir /bricks/shadow_brick1/brick

[eedevs@gluster1-replica ~]$ lsblk
NAME                           MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                              8:0    0  16G  0 disk 
├─vg_bricks-brickpool3_tmeta   252:0    0  16M  0 lvm  
│ └─vg_bricks-brickpool3-tpool 252:2    0  14G  0 lvm  
│   ├─vg_bricks-brickpool3     252:3    0  14G  0 lvm  
│   └─vg_bricks-shadow_brick1  252:4    0   3G  0 lvm  /bricks/shadow_brick1
└─vg_bricks-brickpool3_tdata   252:1    0  14G  0 lvm  
  └─vg_bricks-brickpool3-tpool 252:2    0  14G  0 lvm  
    ├─vg_bricks-brickpool3     252:3    0  14G  0 lvm  
    └─vg_bricks-shadow_brick1  252:4    0   3G  0 lvm  /bricks/shadow_brick1
vda                            253:0    0  20G  0 disk 
├─vda1                         253:1    0  20G  0 part /
└─vda15                        253:15   0   1M  0 part 

2.1.2 Optional - add to /etc/fstab

[eedevs@gluster1-replica ~]$ sudo vim /etc/fstab 

[eedevs@gluster1-replica ~]$ cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Aug 30 23:46:07 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c4a662b3-efba-485b-84a6-26e1477a6825 /                       ext4    defaults        1 1

/dev/vg_bricks/shadow_brick1  /bricks/shadow_brick1/  xfs  rw,noatime,inode64,nouuid 1 2
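
If you would rather not open an editor, the same entry can be appended and tested non-interactively; a quick sketch (run the equivalent on repl2 with shadow_brick2):

echo '/dev/vg_bricks/shadow_brick1  /bricks/shadow_brick1/  xfs  rw,noatime,inode64,nouuid 1 2' | sudo tee -a /etc/fstab
sudo mount -a                      # will complain if the new line is malformed
findmnt /bricks/shadow_brick1      # confirms the mount is active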

2.2 repeat on repl2

[eedevs@gluster2-replica ~]$ sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1048576 inodes, 4194304 blocks
209715 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
128 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[eedevs@gluster2-replica ~]$ sudo pvcreate /dev/sda
WARNING: ext4 signature detected on /dev/sda at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/sda.
  Physical volume "/dev/sda" successfully created

[eedevs@gluster2-replica ~]$ sudo vgcreate vg_bricks /dev/sda
  Volume group "vg_bricks" successfully created

[eedevs@gluster2-replica ~]$ sudo lvcreate -L 14G -T vg_bricks/brickpool4
  Logical volume "brickpool4" created.

[eedevs@gluster2-replica ~]$ sudo lvcreate -V 3G -T vg_bricks/brickpool4 -n shadow_brick2
  Logical volume "shadow_brick2" created.

[eedevs@gluster2-replica ~]$ sudo mkfs.xfs -i size=512 /dev/vg_bricks/shadow_brick2
meta-data=/dev/vg_bricks/shadow_brick2 isize=512    agcount=8, agsize=98288 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=786304, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[eedevs@gluster2-replica ~]$ sudo mkdir -p /bricks/shadow_brick2

[eedevs@gluster2-replica ~]$ sudo mount /dev/vg_bricks/shadow_brick2 /bricks/shadow_brick2/

[eedevs@gluster2-replica ~]$ sudo mkdir /bricks/shadow_brick2/brick

[eedevs@gluster2-replica ~]$ lsblk
NAME                           MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                              8:0    0  16G  0 disk 
├─vg_bricks-brickpool4_tmeta   252:0    0  16M  0 lvm  
│ └─vg_bricks-brickpool4-tpool 252:2    0  14G  0 lvm  
│   ├─vg_bricks-brickpool4     252:3    0  14G  0 lvm  
│   └─vg_bricks-shadow_brick2  252:4    0   3G  0 lvm  /bricks/shadow_brick2
└─vg_bricks-brickpool4_tdata   252:1    0  14G  0 lvm  
  └─vg_bricks-brickpool4-tpool 252:2    0  14G  0 lvm  
    ├─vg_bricks-brickpool4     252:3    0  14G  0 lvm  
    └─vg_bricks-shadow_brick2  252:4    0   3G  0 lvm  /bricks/shadow_brick2
vda                            253:0    0  20G  0 disk 
├─vda1                         253:1    0  20G  0 part /
└─vda15                        253:15   0   1M  0 part 

2.2.1 Optional - add to /etc/fstab

[eedevs@gluster2-replica ~]$ sudo vim /etc/fstab 

[eedevs@gluster2-replica ~]$ cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Aug 30 23:46:07 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c4a662b3-efba-485b-84a6-26e1477a6825 /                       ext4    defaults        1 1

/dev/vg_bricks/shadow_brick2  /bricks/shadow_brick2/  xfs  rw,noatime,inode64,nouuid 1 2

3. Create the Replicated Volume using the gluster command below on repl1

[eedevs@gluster1-replica ~]$ sudo gluster volume create shadowvol replica 2 repl1:/bricks/shadow_brick1/brick repl2:/bricks/shadow_brick2/brick
[sudo] password for eedevs: 
volume create: shadowvol: success: please start the volume to access data

[eedevs@gluster1-replica ~]$ sudo gluster volume start shadowvol
volume start: shadowvol: success

[eedevs@gluster1-replica ~]$ sudo gluster volume status
Status of volume: distvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/bricks/dist_brick1/brick     49152     0          Y       3916 
Brick server2:/bricks/dist_brick2/brick     49152     0          Y       3872 

Task Status of Volume distvol
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: shadowvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick repl1:/bricks/shadow_brick1/brick     49152     0          Y       6561 
Brick repl2:/bricks/shadow_brick2/brick     49152     0          Y       7342 
Self-heal Daemon on localhost               N/A       N/A        Y       6581 
Self-heal Daemon on gluster1.nyc.eedevs     N/A       N/A        Y       5722 
Self-heal Daemon on server2                 N/A       N/A        Y       5634 
Self-heal Daemon on repl2                   N/A       N/A        Y       7364 

Task Status of Volume shadowvol
------------------------------------------------------------------------------
There are no active volume tasks

[eedevs@gluster1-replica ~]$ sudo gluster volume info shadowvol

Volume Name: shadowvol
Type: Replicate
Volume ID: 329ce3e4-8a9d-48dd-9dd7-e026dbbbd3ac
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: repl1:/bricks/shadow_brick1/brick
Brick2: repl2:/bricks/shadow_brick2/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
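
Optionally, once the volume is started you can confirm the replica pair has no pending heals. The heal-info command below is standard gluster CLI, though its output format varies between releases:

sudo gluster volume heal shadowvol info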

3.1 Set nfs.disable to off so the volume can be mounted over NFS (run on repl1)

[eedevs@gluster1-replica ~]$ sudo gluster volume set shadowvol nfs.disable off

volume set: success
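
With Gluster's built-in NFS server enabled, the volume should now be exported over NFSv3. A quick check from any machine with nfs-utils installed (assuming rpcbind and the NFS ports opened earlier are reachable on repl1):

showmount -e repl1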

4. On the client, set Defaultvers=3 in /etc/nfsmount.conf

4.1 Set up NFS and the firewall

[eedevs@gluster-client ~]$ tee gfs.fw <<-'EOF'
sudo yum -y install vim
sudo yum -y install nfs-utils
sudo systemctl enable firewalld
sudo systemctl start firewalld
sudo firewall-cmd --permanent --zone=public --add-port=24007-24008/tcp
sudo firewall-cmd --permanent --zone=public --add-port=24009/tcp
sudo firewall-cmd --permanent --zone=public --add-service=nfs --add-service=samba --add-service=samba-client
sudo firewall-cmd --permanent --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp --add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp
sudo systemctl reload firewalld
sudo systemctl status firewalld
sudo systemctl stop nfs-lock.service
sudo systemctl stop nfs.target
sudo systemctl disable nfs.target
sudo systemctl start rpcbind.service
EOF


[eedevs@gluster-client ~]$ sudo sh gfs.fw

4.2 Change nfs defaultvers

[eedevs@gluster-client ~]$ sudo sed -i 's/# Defaultvers=4/Defaultvers=3/' /etc/nfsmount.conf

[eedevs@gluster-client ~]$ grep Defaultver /etc/nfsmount.conf

Defaultvers=3

4.3 Mount volume

[eedevs@gluster-client ~]$ sudo mkdir /mnt/shadowvol

[eedevs@gluster-client ~]$ sudo mount -t nfs -o vers=3 repl2:/shadowvol /mnt/shadowvol/

[eedevs@gluster-client ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda     253:0    0  20G  0 disk 
├─vda1  253:1    0  20G  0 part /
└─vda15 253:15   0   1M  0 part 

[eedevs@gluster-client ~]$ df -Th
Filesystem       Type            Size  Used Avail Use% Mounted on
/dev/vda1        ext4             20G  1.2G   18G   7% /
devtmpfs         devtmpfs        236M     0  236M   0% /dev
tmpfs            tmpfs           245M     0  245M   0% /dev/shm
tmpfs            tmpfs           245M   21M  225M   9% /run
tmpfs            tmpfs           245M     0  245M   0% /sys/fs/cgroup
tmpfs            tmpfs            49M     0   49M   0% /run/user/1000
server1:/distvol fuse.glusterfs  6.0G   66M  6.0G   2% /mnt/distvol
repl2:/shadowvol nfs             3.0G   33M  3.0G   2% /mnt/shadowvol

4.4 Optional - add the mount to /etc/fstab

[eedevs@gluster-client ~]$ sudo vim /etc/fstab 

[eedevs@gluster-client ~]$ cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Aug 30 23:46:07 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c4a662b3-efba-485b-84a6-26e1477a6825 /                       ext4    defaults        1 1
server1:/distvol  /mnt/distvol    glusterfs     _netdev    0 0

repl2:/shadowvol  /mnt/shadowvol/  nfs vers=3  0 0

5. Verify by adding a file on the client and confirming that it is available on both repl1 and repl2

5.1 on client

[eedevs@gluster-client ~]$ sudo touch /mnt/shadowvol/replicated

[eedevs@gluster-client ~]$ ls -lrt /mnt/shadowvol/replicated 
-rw-r--r-- 1 root root 0 Sep 10 08:53 /mnt/shadowvol/replicated

5.2 on repl1

[eedevs@gluster1-replica ~]$ ls -lrt /bricks/shadow_brick1/brick/
total 0
-rw-r--r-- 2 root root 0 Sep 10 08:53 replicated

[eedevs@gluster1-replica ~]$ sudo lvdisplay
[sudo] password for eedevs: 
  --- Logical volume ---
  LV Name                brickpool3
  VG Name                vg_bricks
  LV UUID                G9E8Wa-V0Kr-tY8m-fUiW-hotH-gs8P-4ZoV3E
  LV Write Access        read/write
  LV Creation host, time gluster1-replica.nyc.eedevs, 2016-09-10 08:30:27 +0000
  LV Pool metadata       brickpool3_tmeta
  LV Pool data           brickpool3_tdata
  LV Status              available
  # open                 2
  LV Size                14.00 GiB
  Allocated pool data    0.08%
  Allocated metadata     0.59%
  Current LE             3584
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:2
   
  --- Logical volume ---
  LV Path                /dev/vg_bricks/shadow_brick1
  LV Name                shadow_brick1
  VG Name                vg_bricks
  LV UUID                LKpKJ5-SeSW-xgts-QqMp-hpch-eW1G-Or2ElN
  LV Write Access        read/write
  LV Creation host, time gluster1-replica.nyc.eedevs, 2016-09-10 08:30:32 +0000
  LV Pool name           brickpool3
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Mapped size            0.36%
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192

  Block device           252:4

5.3 on repl2

[eedevs@gluster2-replica ~]$ ls -lrt /bricks/shadow_brick2/brick/
total 0
-rw-r--r-- 2 root root 0 Sep 10 08:53 replicated
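
To compare both copies in one pass, a small sketch run from any host that can SSH to repl1 and repl2 (SSH access to both servers is assumed and not covered in this guide; the glob covers the differing brick directory names):

for h in repl1 repl2; do
  echo "== $h =="
  ssh "$h" 'ls -l /bricks/shadow_brick*/brick/'
done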

[eedevs@gluster2-replica ~]$ sudo lvdisplay 
[sudo] password for eedevs: 
  --- Logical volume ---
  LV Name                brickpool4
  VG Name                vg_bricks
  LV UUID                cpKfgV-EePo-U8HX-4cIe-om3L-Y243-r4Xm1Q
  LV Write Access        read/write
  LV Creation host, time gluster2-replica.nyc.eedevs, 2016-09-10 08:35:29 +0000
  LV Pool metadata       brickpool4_tmeta
  LV Pool data           brickpool4_tdata
  LV Status              available
  # open                 2
  LV Size                14.00 GiB
  Allocated pool data    0.08%
  Allocated metadata     0.59%
  Current LE             3584
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:2
   
  --- Logical volume ---
  LV Path                /dev/vg_bricks/shadow_brick2
  LV Name                shadow_brick2
  VG Name                vg_bricks
  LV UUID                PMP3tn-gIHv-4h5o-JMcq-7NWx-IlxF-PokaaL
  LV Write Access        read/write
  LV Creation host, time gluster2-replica.nyc.eedevs, 2016-09-10 08:35:36 +0000
  LV Pool name           brickpool4
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Mapped size            0.36%
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192

  Block device           252:4



Friday, 9 September 2016

Scalable network filesystem with GlusterFS on CentOS 7 using DigitalOcean Block Storage


Today's guide goes through how to set up a scalable network filesystem using GlusterFS.

GlusterFS is open-source software well suited to storage-intensive tasks such as keeping your data and media in the cloud, for whatever purpose you need it.

This first part will cover:
  1. Installation of glusterfs
  2. Server setup which hosts the actual file system in which data will be stored
  3. Client setup which mounts the volume
  4. Creation of a basic unit of storage called Brick
  5. Creation of a cluster of linked servers working together to form a single computer
  6. Creation of a distributed File System that allows your clients to concurrently access data over a computer network
  7. Creation of a logical collection of bricks
  8. Installation of Fuse to allow non-privileged users to create their own filesystem without editing kernel code

For this exercise we will be deploying 5 virtual machines using DigitalOcean. 

We will create the VMs in New York so we can use DigitalOcean's new Block Storage feature to attach the additional storage used for the gluster volumes.

To make it easy for you to follow I have:

  • highlighted commands in BLUE
  • highlighted expected output in GREEN
  • made server names bold

Prerequisite:

  1. Basic familiarity with Linux commands and remote access to servers
  2. This demo can be followed using local VMs using VirtualBox or VM Fusion etc
  3. Understanding of LVM

Servers used:

  1. gluster-client.nyc.eedevs 512 MB / 20 GB Disk / NYC1 - CentOS 7.2 x64
  2. gluster2-replica.nyc.eedevs 512 MB / 20 GB Disk + 16 GB / NYC1 - CentOS 7.2 x64
  3. gluster1-replica.nyc.eedevs 512 MB / 20 GB Disk + 16 GB / NYC1 - CentOS 7.2 x64
  4. gluster2.nyc.eedevs 512 MB / 20 GB Disk + 16 GB / NYC1 - CentOS 7.2 x64
  5. gluster1.nyc.eedevs 512 MB / 20 GB Disk + 16 GB / NYC1 - CentOS 7.2 x64

Useful links:


Virtual Machines

Block Storage

1. VM user setup

On VMs

Prepare the servers by creating a new user account and disabling root SSH login. Run the script below as root, since it modifies /etc/sudoers and sshd_config.

$ tee setup <<-'EOF'
echo "eedevs    ALL=(ALL)   ALL" >> /etc/sudoers
sed -i 's/#PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl reload sshd.service
useradd -m eedevs
passwd eedevs
EOF
$ sh setup
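
Because the script edits sshd_config and reloads sshd, keep your current session open until you have confirmed you can still log in. It is also worth validating the configuration; sshd -t only checks syntax and prints nothing when the file is fine:

sudo sshd -t && echo "sshd_config OK"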

Install glusterfs-server on server1 and server2


$ tee gfs <<-'EOF'
sudo yum -y install wget
sudo yum -y install centos-release-gluster
sudo yum -y install epel-release
sudo yum -y install glusterfs-server
sudo systemctl start glusterd
sudo systemctl enable glusterd
EOF
$ sudo sh gfs
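
A quick sanity check that the packages installed and the daemon is running:

gluster --version
sudo systemctl is-active glusterd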

Open up the firewall so that the servers can communicate and form a storage cluster (trusted pool). Run on server1 and server2.


$ tee gfs.fw <<-'EOF'
sudo yum -y install vim
sudo yum -y install nfs-utils
sudo systemctl enable firewalld
sudo systemctl start firewalld
sudo firewall-cmd --permanent --zone=public --add-port=24007-24008/tcp
sudo firewall-cmd --permanent --zone=public --add-port=24009/tcp
sudo firewall-cmd --permanent --zone=public --add-service=nfs --add-service=samba --add-service=samba-client
sudo firewall-cmd --permanent --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp --add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp
sudo systemctl reload firewalld
sudo systemctl status firewalld
sudo systemctl stop nfs-lock.service
sudo systemctl stop nfs.target
sudo systemctl disable nfs.target
sudo systemctl start rpcbind.service
EOF
$ sudo sh gfs.fw
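
To confirm the permanent rules are active after the reload:

sudo firewall-cmd --zone=public --list-ports
sudo firewall-cmd --zone=public --list-services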

Update /etc/hosts with the list of VMs (for those without a DNS server setup) on all servers


$ tee hosts <<'EOF'
sudo echo "159.203.180.188 gluster-client.nyc.eedevs client" >> /etc/hosts
sudo echo "159.203.180.42 gluster2-replica.nyc.eedevs repl2" >> /etc/hosts
sudo echo "162.243.173.248 gluster1-replica.nyc.eedevs repl1" >> /etc/hosts
sudo echo "162.243.168.90 gluster2.nyc.eedevs server2" >> /etc/hosts
sudo echo "192.241.156.61 gluster1.nyc.eedevs server1" >> /etc/hosts
EOF
$ sudo sh hosts
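
A quick way to confirm the entries resolve on each server:

for h in client repl1 repl2 server1 server2; do getent hosts "$h"; done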

2. Create a trusted storage pool between server1 and server2

On server1
2.1.1 Create the trusted pool using the gluster peer probe command


[eedevs@gluster1 ~]$ sudo gluster peer probe server2

2.1.2 Create the brick using the block storage created on DigitalOcean


[eedevs@gluster1 ~]$ sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01-s
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1048576 inodes, 4194304 blocks
209715 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
128 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   


[eedevs@gluster1 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda       8:0    0  16G  0 disk 
vda     253:0    0  20G  0 disk 
├─vda1  253:1    0  20G  0 part /
└─vda15 253:15   0   1M  0 part


[eedevs@gluster1 ~]$ sudo pvcreate /dev/sda

[eedevs@gluster1 ~]$ sudo vgcreate vg_bricks /dev/sda

[eedevs@gluster1 ~]$ sudo lvcreate -L 14G -T vg_bricks/brickpool1

[eedevs@gluster1 ~]$ lsblk 
NAME                         MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                            8:0    0  16G  0 disk 
├─vg_bricks-brickpool1_tmeta 252:0    0  16M  0 lvm  
│ └─vg_bricks-brickpool1     252:2    0  14G  0 lvm  
└─vg_bricks-brickpool1_tdata 252:1    0  14G  0 lvm  
  └─vg_bricks-brickpool1     252:2    0  14G  0 lvm 

2.1.3 Create a logical volume of 3 GB


[eedevs@gluster1 ~]$ sudo lvcreate -V 3G -T vg_bricks/brickpool1 -n dist_brick1

[eedevs@gluster1 ~]$ sudo mkfs.xfs -i size=512 /dev/vg_bricks/dist_brick1

[eedevs@gluster1 ~]$ sudo mkdir -p /bricks/dist_brick1

[eedevs@gluster1 ~]$ sudo mount /dev/vg_bricks/dist_brick1 /bricks/dist_brick1/

2.1.3.1 Add to /etc/fstab to make the mount permanent


[eedevs@gluster1 ~]$ sudo vim /etc/fstab

/dev/vg_bricks/dist_brick1 /bricks/dist_brick1 xfs rw,noatime,inode64,nouuid 1 2

2.1.4 Create the brick directory inside the mounted filesystem


[eedevs@gluster1 ~]$ sudo mkdir /bricks/dist_brick1/brick

[eedevs@gluster1 ~]$ lsblk 
NAME                           MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                              8:0    0  16G  0 disk 
├─vg_bricks-brickpool1_tmeta   252:0    0  16M  0 lvm  
│ └─vg_bricks-brickpool1-tpool 252:2    0  14G  0 lvm  
│   ├─vg_bricks-brickpool1     252:3    0  14G  0 lvm  
│   └─vg_bricks-dist_brick1    252:4    0   3G  0 lvm  /bricks/dist_brick1
└─vg_bricks-brickpool1_tdata   252:1    0  14G  0 lvm  
  └─vg_bricks-brickpool1-tpool 252:2    0  14G  0 lvm  
    ├─vg_bricks-brickpool1     252:3    0  14G  0 lvm  
    └─vg_bricks-dist_brick1    252:4    0   3G  0 lvm  /bricks/dist_brick1



2.2 Repeat on server2


[eedevs@gluster2 ~]$ sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-02-s
[sudo] password for eedevs: 
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1048576 inodes, 4194304 blocks
209715 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
128 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[eedevs@gluster2 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda       8:0    0  16G  0 disk 
vda     253:0    0  20G  0 disk 
├─vda1  253:1    0  20G  0 part /
└─vda15 253:15   0   1M  0 part 

[eedevs@gluster2 ~]$ sudo pvcreate /dev/sda
WARNING: ext4 signature detected on /dev/sda at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/sda.
  Physical volume "/dev/sda" successfully created

[eedevs@gluster2 ~]$ sudo vgcreate vg_bricks /dev/sda
  Volume group "vg_bricks" successfully created

[eedevs@gluster2 ~]$ sudo lvcreate -L 14G -T vg_bricks/brickpool2
  Logical volume "brickpool2" created.

[eedevs@gluster2 ~]$ sudo lvcreate -V 3G -T vg_bricks/brickpool2 -n dist_brick2
  Logical volume "dist_brick2" created.

[eedevs@gluster2 ~]$ sudo mkfs.xfs -i size=512 /dev/vg_bricks/dist_brick2
meta-data=/dev/vg_bricks/dist_brick2 isize=512    agcount=8, agsize=98288 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=786304, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[eedevs@gluster2 ~]$ sudo mkdir -p /bricks/dist_brick2

[eedevs@gluster2 ~]$ sudo mount /dev/vg_bricks/dist_brick2 /bricks/dist_brick2/

[eedevs@gluster2 ~]$ sudo mkdir /bricks/dist_brick2/brick

[eedevs@gluster2 ~]$ lsblk 
NAME                           MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                              8:0    0  16G  0 disk 
├─vg_bricks-brickpool2_tmeta   252:0    0  16M  0 lvm  
│ └─vg_bricks-brickpool2-tpool 252:2    0  14G  0 lvm  
│   ├─vg_bricks-brickpool2     252:3    0  14G  0 lvm  
│   └─vg_bricks-dist_brick2    252:4    0   3G  0 lvm  /bricks/dist_brick2
└─vg_bricks-brickpool2_tdata   252:1    0  14G  0 lvm  
  └─vg_bricks-brickpool2-tpool 252:2    0  14G  0 lvm  
    ├─vg_bricks-brickpool2     252:3    0  14G  0 lvm  
    └─vg_bricks-dist_brick2    252:4    0   3G  0 lvm  /bricks/dist_brick2
vda                            253:0    0  20G  0 disk 
├─vda1                         253:1    0  20G  0 part /
└─vda15                        253:15   0   1M  0 part


3. Create the distributed volume on server1


[eedevs@gluster1 ~]$ sudo gluster volume create distvol server1:/bricks/dist_brick1/brick server2:/bricks/dist_brick2/brick
volume create: distvol: success: please start the volume to access data

[eedevs@gluster1 ~]$ sudo gluster volume start distvol
volume start: distvol: success

[eedevs@gluster1 ~]$ sudo gluster volume info distvol

Volume Name: distvol
Type: Distribute
Volume ID: 2bc47037-573a-49f3-a088-cfae97ad3c96
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server1:/bricks/dist_brick1/brick
Brick2: server2:/bricks/dist_brick2/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

4. Mount the distributed volume on the client. First install the glusterfs-fuse package.



[eedevs@gluster-client ~]$ sudo yum install glusterfs-fuse -y
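
The mount target used below is /mnt/distvol; the transcript does not show it being created, so make the directory first:

sudo mkdir -p /mnt/distvol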

[eedevs@gluster-client ~]$ sudo mount -t glusterfs -o acl server1:/distvol /mnt/distvol/

[eedevs@gluster-client ~]$ df -Th
Filesystem       Type            Size  Used Avail Use% Mounted on
/dev/vda1        ext4             20G  1.2G   18G   6% /
devtmpfs         devtmpfs        236M     0  236M   0% /dev
tmpfs            tmpfs           245M     0  245M   0% /dev/shm
tmpfs            tmpfs           245M  8.3M  237M   4% /run
tmpfs            tmpfs           245M     0  245M   0% /sys/fs/cgroup
tmpfs            tmpfs            49M     0   49M   0% /run/user/0
tmpfs            tmpfs            49M     0   49M   0% /run/user/1000
server1:/distvol fuse.glusterfs  6.0G   66M  6.0G   2% /mnt/distvol

5. Verify by creating a file on the client. Since distvol is a distributed (not replicated) volume, the file is stored on only one of the gluster servers - in this case it ends up on server2's brick.


[eedevs@gluster-client distvol]$ sudo touch hello
[eedevs@gluster-client distvol]$ ls -l
total 0
-rw-r--r-- 1 root root 0 Sep  9 19:48 hello

[eedevs@gluster2 brick]$ cd /bricks/dist_brick2/brick/
[eedevs@gluster2 brick]$ ls -lrt
total 0
-rw-r--r-- 2 root root 0 Sep  9 19:48 hello
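
If you want to see which brick received a file without logging into each server, GlusterFS exposes a virtual extended attribute for this on the FUSE mount. A sketch run on the client (assumes the attr package is installed for getfattr; trusted.glusterfs.pathinfo is the standard GlusterFS pathinfo attribute):

sudo getfattr -n trusted.glusterfs.pathinfo /mnt/distvol/hello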

6. Check the current state of gluster


[eedevs@gluster1 brick]$ sudo gluster peer status
[sudo] password for eedevs: 
Number of Peers: 1

Hostname: server2
Uuid: 2821244c-1f0b-4730-bc19-44fe02aba1b5
State: Peer in Cluster (Connected)


[eedevs@gluster1 brick]$ sudo gluster volume status
Status of volume: distvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/bricks/dist_brick1/brick     49152     0          Y       3916 
Brick server2:/bricks/dist_brick2/brick     49152     0          Y       3872 

Task Status of Volume distvol
------------------------------------------------------------------------------
There are no active volume tasks

[eedevs@gluster1 brick]$ sudo gluster volume info distvol

Volume Name: distvol
Type: Distribute
Volume ID: 2bc47037-573a-49f3-a088-cfae97ad3c96
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server1:/bricks/dist_brick1/brick
Brick2: server2:/bricks/dist_brick2/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

On server2


[eedevs@gluster2 brick]$ sudo gluster peer status
[sudo] password for eedevs: 
Number of Peers: 1

Hostname: gluster1.nyc.eedevs
Uuid: 6249d56b-93c8-490e-b566-6e7dd66f458d
State: Peer in Cluster (Connected)

[eedevs@gluster2 brick]$ sudo gluster volume status
Status of volume: distvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/bricks/dist_brick1/brick     49152     0          Y       3916 
Brick server2:/bricks/dist_brick2/brick     49152     0          Y       3872 

Task Status of Volume distvol
------------------------------------------------------------------------------
There are no active volume tasks

[eedevs@gluster2 brick]$ sudo gluster volume info distvol

Volume Name: distvol
Type: Distribute
Volume ID: 2bc47037-573a-49f3-a088-cfae97ad3c96
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: server1:/bricks/dist_brick1/brick
Brick2: server2:/bricks/dist_brick2/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on