Installing DRBD On Hardy!

DRBD (Distributed Replicated Block Device) is a technology that is used to replicate data over TCP/IP. It is used to build HA Clusters and it can be seen as a RAID-1 implementation over the network.

As you may all know, the DRBD kernel module is now included in Hardy Heron Server Edition’s kernel, so there is no more source downloading and compiling, which makes it easier to install and configure. Here I’ll show you how to install DRBD and make a simple configuration using one resource (testing). I won’t cover how to install and configure heartbeat for automatic failover (that will be shown in a future post).

First of all, we have to install Ubuntu Hardy Heron Server Edition on two servers and manually edit the partition table. We do this to leave FREE SPACE that will be used later on as the block device for DRBD. If you’ve seen the DRBD + NFS HowTo on HowToForge.com, note that creating the partitions for DRBD and leaving them unmounted will NOT work here, and we won’t be able to create the resource for DRBD. This is why we leave the FREE SPACE and create the partition later on, once the system is installed.

So, after the installation we have to create the partition, or partitions (if we were using an external partition for the meta-data; in this case it will be internal), that DRBD will use as its block device. For this we will use fdisk and do as follows:


sudo fdisk /dev/sda
n (to create a new partition)
l (to make it a logical partition; accept the defaults, the default type is 83 “Linux”)
w (to write the changes and exit)

After creating the partitions we will have to REBOOT both servers so that the kernel uses the new partition table. After reboot we have to install drbd8-utils on both servers:

sudo apt-get install drbd8-utils

Now that we have drbd8-utils installed, we can configure /etc/drbd.conf, where we will define a simple DRBD resource, as follows:

resource testing { # name of the resource

  protocol C;

  on drbd1 { # first server’s hostname
    device /dev/drbd0; # name of the DRBD device
    disk /dev/sda7; # partition to use, which was created using fdisk
    address 172.16.0.130:7788; # IP address and port number used by DRBD
    meta-disk internal; # store the meta-data on the same partition
  }

  on drbd2 { # second server’s hostname
    device /dev/drbd0;
    disk /dev/sda7;
    address 172.16.0.131:7788;
    meta-disk internal;
  }

  disk {
    on-io-error detach;
  }

  net {
    max-buffers 2048;
    ko-count 4;
  }

  syncer {
    rate 10M;
    al-extents 257;
  }

  startup {
    wfc-timeout 0;
    degr-wfc-timeout 120; # 2 minutes
  }
}

Note that we are using drbd1 and drbd2 as hostnames. These hostnames must be configured, and the servers should be able to ping each other via those hostnames (that means we either need a DNS server or have to add entries for both servers in /etc/hosts).
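For example, the /etc/hosts file on both servers could contain entries like these (using the IP addresses from the configuration above):

172.16.0.130 drbd1
172.16.0.131 drbd2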

After creating the configuration in /etc/drbd.conf, we can now create the DRBD resource. For this we issue the following on both servers:

sudo drbdadm create-md testing

After issuing this, we will be asked for confirmation to create the meta-data on the block device.

Now we have to power off both servers. After powering them off, we start the first server. While booting, DRBD will wait for its peer to connect and will tell us that we can stop waiting by typing ‘yes’.

After confirming with ‘yes’, we can start the second server. Once the second server is running, both nodes’ resources are Secondary, so we have to make one of them Primary. For this, we issue the following on the server where we would like the resource to be Primary:

sudo drbdadm -- --overwrite-data-of-peer primary all

We verify this by issuing:

cat /proc/drbd

This should show the resource as Connected (or still synchronizing), with the local node in the Primary role and the peer as Secondary.
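If you want to use the replicated device right away (before heartbeat takes care of the mounting), you can create a filesystem on it and mount it manually on the primary node. This is just a minimal sketch, assuming ext3 and /mnt/drbd as the mount point:

sudo mkfs.ext3 /dev/drbd0
sudo mkdir -p /mnt/drbd
sudo mount /dev/drbd0 /mnt/drbd

Remember to do this only on the node that holds the Primary role; the Secondary cannot mount the device.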

Well, up to this point I’ve shown you how to install and configure DRBD on Hardy, and how to make one of the servers hold the resource as Primary. But we still don’t have automatic failover or automatic mounting. In a future post I’ll show how to configure heartbeat to provide automatic failover and take control of the resources, as well as how to configure STONITH with the meatware device, so that we won’t have a split-brain condition (or at least we’ll try). I’ll also show how to configure NFS and MySQL to use this DRBD resource.

BTW, if you have questions you know where to find me :).

I now have a MOTU Mentor!

Today, just a few hours ago, I received a notification from Nicolás Valcarcel (nxvl), who is part of the MOTU Mentors Reception Board, telling me that I have been assigned a MOTU Mentor for the Junior Mentoring Program.

When he told me that I already had a mentor, I just felt that things are going the way I planned :). My mentor is Steven Stalcup (vorian), and even though he loves KDE and I love GNOME, we will get along :). Every package is good to work with while someone is learning, but I’m really interested in server-related stuff.

So if anyone has a suggestion for my learning process, don’t hesitate to contact me. I set up a wiki page where I’ll keep track of everything I do through the mentoring process. I already have a first task… read the Debian New Maintainer’s Guide. It should be easy since I have already worked with some of the tools and done some merges. But anyways… wish me luck :).


Ubuntu in My Thesis: Part 2

Well, as you may know, I had to defend my thesis yesterday (July 16) in Arequipa, Peru, at “Universidad Católica de Santa María”. It was approved, and I finally became an engineer in Systems Engineering.

As I said in a previous post, my thesis was about High Availability Clusters. Actually, it was about designing a model to implement High Availability Clusters for web servers. My model consists of three layers, which provide fault tolerance (high availability) and scalability. Everything was done using 6 Ubuntu Servers virtualized on VMware.

Layer 1 consists of two servers in active/passive mode. This layer provides fault tolerance, which can be interpreted as high availability, but does not provide scalability. I used LVS (with Direct Routing) to load balance between the web servers on Layer 2. I also used Heartbeat, ipvsadm and ldirectord.
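ldirectord normally builds the LVS rules from its own configuration file, but underneath they are just ipvsadm rules. As a rough sketch (the virtual IP 10.0.0.100 and the real servers’ IPs are only assumptions, not the values from my thesis), the Direct Routing setup on the active director looks something like this:

sudo ipvsadm -A -t 10.0.0.100:80 -s rr
sudo ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.11:80 -g
sudo ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.12:80 -g

The -g flag selects the Direct Routing (gatewaying) forwarding method, and -s rr selects round-robin scheduling.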

Layer 2 consists of 4 web servers (there can be more), in active/active mode. This layer provides high availability and scalability. It also allows us to use any kind of web server (including Windows Server), because load balancing is done at Layer 2 of the OSI model (thanks to the Direct Routing method).
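For Direct Routing to work, each real server also has to accept traffic for the virtual IP without answering ARP requests for it, so that only the director receives the incoming packets. A minimal sketch of how this is usually done on a Linux web server (again, the VIP 10.0.0.100 is an assumption):

sudo ip addr add 10.0.0.100/32 dev lo
sudo sysctl -w net.ipv4.conf.all.arp_ignore=1
sudo sysctl -w net.ipv4.conf.all.arp_announce=2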

And Layer 3 consists of two servers in active/passive mode, to provide data access. Here I used NFS and MySQL to provide file access and database access on both servers, and I used DRBD to provide data replication between them. Monitoring was done using Heartbeat. This layer also provides fault tolerance but not scalability. To provide fencing in this layer I used meatware, which is provided by STONITH for heartbeat.
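As a rough sketch of how heartbeat (in its v1 configuration style) can tie this layer together, the line in /etc/ha.d/haresources might look something like the following; the node name, service IP, DRBD resource name and mount point here are all assumptions, not the exact values from my thesis:

data1 IPaddr::172.16.0.140 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server mysql

Heartbeat starts these resources on the active node and, on failover, the passive node takes over the IP, promotes the DRBD resource, mounts the filesystem and starts NFS and MySQL.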

As you can see, this is a classical implementation of High Availability Clusters using LVS. For some of you this might be an everyday thing, but not here in Peru, because there aren’t many organizations and companies, or even Linux experts, who know how to implement Linux-based clusters (besides banks). My thesis helps those who want to know how to create them, and shows that Microsoft and Red Hat are not the only platforms on which we can create High Availability Clusters. It also shows how powerful DRBD is when we don’t have the possibility to work with SANs or other data storage technologies.

Thanks to all the developers who created these great tools that allow us to create Linux-based High Availability Clusters, and thanks also to the Ubuntu Community for providing these packages. This made my life easier :D.

And well… now I’m unemployed and with nothing to do but contribute to Ubuntu 😛 (I guess I’ll look for a job as a sysadmin or network admin). But finally I have more time to concentrate on learning more about Ubuntu Development.


Ubuntu in My Thesis

Finally, after 6 months of doing nothing but partying, contributing to Ubuntu, and finishing my thesis (not working)… I’m about to defend it tomorrow (July 16) at noon (GMT -5).

My thesis is about High Availability Clusters, and I’m glad to say that it was done using Ubuntu Servers over VMware. Configuration and installation were kind of hard, because it was hell to administer every single server without a centralized administration application, but it was lots of fun.

This made me realize that we really need a cluster administration application for Ubuntu servers (if one does not exist already), so that LVS-based clusters can be installed as simply as using Ubuntu :).

But, anyways… wish me luck… because tomorrow I’ll become an engineer in Systems Engineering.
