GlusterFS 3.2.5 on Ubuntu 12.04


Environment: a KVM host running three Ubuntu 12.04 virtual machines. Software: vanilla GlusterFS 3.2.5 (right out of precise's repo). Hostnames resolve over the network, but I've added the IPs here to keep things clear. Servers: gluster1 (192.168.6.20) and gluster2 (192.168.6.21). Volume name: glustervolume1. Client: gluster3 (192.168.6.22) will have glusterfs-client installed and will mount glustervolume1.
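If name resolution isn't already handled elsewhere, an /etc/hosts sketch on each machine (using the addresses above) would look like:

192.168.6.20    gluster1
192.168.6.21    gluster2
192.168.6.22    gluster3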

First thing I did after making fresh clones of my prepared machines was to add another Virtio drive to each machine.

[Shutdown the server -> Open the server's console -> Click on the lightbulb -> Add Hardware -> Storage -> Device Type -> Virtio Disk -> Click Finish -> Start server]
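If you'd rather skip the virt-manager clicking, a command-line sketch along these lines should do the same (the image path, size and domain name here are just placeholders):

qemu-img create -f raw /var/lib/libvirt/images/gluster1-disk2.img 5G
virsh attach-disk gluster1 /var/lib/libvirt/images/gluster1-disk2.img vdb --persistent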

Prepare the disk

#Find the new drive
fdisk -l
#Partition the new drive
fdisk /dev/vdb

[n->p->1->ENTER->ENTER->t->8e->ENTER->p->VERIFY->w]
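To double-check before moving on:

#Verify /dev/vdb1 exists and has partition type 8e (Linux LVM)
fdisk -l /dev/vdb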

There is no excuse, use LVM

pvcreate /dev/vdb1
vgcreate glusterfs /dev/vdb1
lvcreate -l 100%FREE -n first-disk glusterfs
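Quick sanity check on the LVM pieces:

#Verify the physical volume, volume group and logical volume
pvs
vgs
lvs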

Make the file system
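If mkfs.xfs isn't on the machine yet, it lives in the xfsprogs package:

apt-get install xfsprogs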

root@gluster1:~# mkfs.xfs /dev/glusterfs/first-disk
meta-data=/dev/glusterfs/first-disk isize=256    agcount=4, agsize=327424 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=1309696, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0 

Mount the file system

echo "/dev/glusterfs/first-disk /export xfs defaults 1 2" >> /etc/fstab
mount -a
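Check that it mounted:

df -h /export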

TODO: Now I should add a second network interface dedicated to the gluster cluster. Having one is best practice when doing disk I/O over the network, but on my workstation it's just an exercise... I'll add it in later.
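For reference, on 12.04 that would just be another stanza in /etc/network/interfaces followed by an ifup eth1 (the interface name and address below are made up for the sketch):

#Dedicated gluster interface (sketch)
auto eth1
iface eth1 inet static
    address 192.168.7.20
    netmask 255.255.255.0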

Install glusterfs-server on gluster1 and gluster2

root@gluster1&2:~#apt-get install glusterfs-server 

Adding gluster2 as a peer from gluster1

root@gluster1:~# gluster peer probe gluster2
root@gluster1:~# gluster peer status
Number of Peers: 1
Hostname: gluster2
Uuid: f5aefb7c-56aa-4220-a114-f14f1d1bf278

Now that we have hooked our machines together, create our first volume (a full list of gluster volume options is at the bottom):

root@gluster1:~# gluster volume create glustervolume1 replica 2 transport tcp gluster1:/export/brick1 gluster2:/export/brick1
root@gluster1:~# gluster volume info
 
Volume Name: glustervolume1
Type: Replicate
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/export/brick1
Brick2: gluster2:/export/brick1

Allow machines from our subnet to mount glustervolume1

root@gluster1:~# gluster volume set glustervolume1 auth.allow 192.168.6.*

Start the volume

root@gluster1:~# gluster volume start glustervolume1
root@gluster1:~# gluster volume info
Volume Name: glustervolume1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.6.20:/export/brick1
Brick2: 192.168.6.21:/export/brick1
Options Reconfigured:
auth.allow: 192.168.6.*

Client setup

root@gluster3:~# apt-get install glusterfs-client
root@gluster3:~# mount -t glusterfs gluster1:/glustervolume1 /mnt
root@gluster3:~# echo "gluster1:/glustervolume1 /mnt/ glusterfs defaults,_netdev 0 0" >> /etc/fstab

Easy, right? Okay, so what else can we do? What about replicating data for HA? What exactly are the storage options for gluster? Let's go over them below.

With no arguments gluster creates distributed volumes: simply list any number of server:/brick pairs, and when you mount the volume you will see one big drive:

gluster volume create glustervolume transport tcp gluster1:/export/brick1 gluster2:/export/brick1

For HA we need replication; simply add the replica 2 stanza:

gluster volume create glustervolume1 replica 2 transport tcp gluster1:/export/brick1 gluster2:/export/brick1

What about distributed replication (scaling storage size and having HA)? It's quite simple: you create your volume the same way (replica 2) but add four machines rather than two to the cluster. Voila, gluster takes care of it: distributed replication. Note that brick order matters here: each consecutive group of replica-count bricks forms a replica set, so list the bricks so that copies end up on different servers.

gluster volume create glustervolume1 replica 2 transport tcp gluster1:/export/brick1 gluster2:/export/brick1 gluster3:/export/brick1 gluster4:/export/brick1

For striped volumes (increased read performance), replace replica with stripe:

gluster volume create glustervolume1 stripe 2 transport tcp gluster1:/export/brick1 gluster2:/export/brick1

Finally, distributed-striped-replicated (yes, this will take many nodes):

gluster volume create glustervolume1 stripe 2 replica 2 transport tcp gluster1:/export/brick1 gluster2:/export/brick1 gluster3:/export/brick1 gluster4:/export/brick1 gluster5:/export/brick1 gluster6:/export/brick1 gluster7:/export/brick1 gluster8:/export/brick1
Notes

Delete a volume (stop it first)

gluster volume stop glustervolume1
gluster volume delete glustervolume1

Debugging

tail -f /var/log/glusterfs/*
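Peer and volume state are also worth a look when something misbehaves:

gluster peer status
gluster volume info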
