UPDATE: No laptop… No phone…

My plans to work over the holidays fell through because I ended up not having my laptop available. Fortunately, I got it back a few days ago, which means I can now get back to the Ubuntu-related things I wanted to do (TestDrive, PowerNap, updating the Cluster Stack to the latest versions).

However, after getting my laptop back, I dropped my Nexus One into water… (yeah, bummer). Fortunately, it was turned off. I removed the battery, memory card, and SIM card, and dried it as fast as I could… I haven’t turned it on yet since I’m waiting for it to dry completely; to help with that, I put it into a bowl of rice, since rice is supposed to absorb moisture and humidity. It has been sitting there for the last couple of days. Hopefully it works again!

UPDATE: After leaving my N1 to dry in rice for a couple of days, I decided to give it a try today and see if it works… and IT DOES!! Nothing seems to be malfunctioning… I guess I just got lucky!!

Anyway, the good thing is that I had an old phone, so I’m not totally cut off (just no tweeting, Facebook, or IRC from the phone when I’m not at home… I guess I’ll prolly be suffering withdrawal symptoms… or not :) ).

Posted in Planet, Ubuntu. 2 Comments »


Graduated!!! And imagine… 45°F in Miami… It was freaking freezing when the pic was taken!!!

Posted in Planet, Ubuntu. 4 Comments »

Crazy weekend but finally graduating…

So last Monday I finally finished all the required coursework and am officially done with my Masters in Telecom and Networking. However, it wasn’t what I expected. I wanted to relax the whole week until my commencement ceremony tomorrow, but… it wasn’t possible.

I realized that I still had lots of things to do, just not for school. They were mainly for Ubuntu (TestDrive, PowerNap, and the cluster stack). However, I also realized that this was the weekend I was moving all my stuff to my girlfriend’s new apartment. Unfortunately, I couldn’t do everything I wanted, but I’m almost done with the move.

So after the weekend, tomorrow I’ll have the satisfaction of saying it’s all over (school), and I’ll have lots of free time to work on my Ubuntu-related stuff until I get a job, since I’ll be stuck in MIA for the holidays and won’t be going back to Peru to spend them with the family. But at least I can say I have Ubuntu work to do.

Posted in Planet, Ubuntu. 2 Comments »

TestDrive: Testing an Ubuntu ISO on real hardware??

So, last month I was reading the “Unity Desktop and maverick backport” thread on the ubuntu-devel list. At some point, the discussion turned to how to test Natty (Unity/Compiz specifically) on real hardware from the early stages of the development cycle. Dustin recommended using TestDrive to do the testing. However, he also mentioned that 3D acceleration is not available in the VMs, so his recommendation was more suited to 2D testing.

That discussion reminded me of an outdated proposed branch for TestDrive, which added an option to launch an ISO from GRUB by placing the ISO in a special folder and creating an entry in GRUB’s boot menu. So, today I decided to test that feature! It works, but the code needs improvements. Before actually working on them, I was wondering what y’all think.
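For reference, a GRUB 2 loopback entry for booting an Ubuntu live ISO typically looks something like the sketch below. The file name and paths here are illustrative, not what the TestDrive branch actually generates:

```
menuentry "TestDrive ISO" {
    set isofile="/boot/iso/natty-desktop-i386.iso"
    loopback loop $isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile noprompt
    initrd (loop)/casper/initrd.lz
}
```

The iso-scan/filename= parameter tells casper where to find the ISO on disk, so the live session can boot with full access to the real hardware, including 3D acceleration.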

So my question is: would it be a good idea to add an option to TestDrive that makes an ISO available for booting directly from GRUB, for testing on real hardware, or not? Pros/cons, comments, suggestions?

One more month to Graduate… Still Job searching…

Finally, after a really loooooooong year and a half since starting my studies in the MS in Telecommunications & Networking at Florida International University, I’m about to graduate.

I officially have one more month of classes left. This is an exciting time in which I have to decide what’s gonna happen to me in the near future. I’ve been offered the possibility of continuing my studies with a PhD in Computer Science, but I’m not sure yet what I’m gonna do. In the meantime, I’m still job searching.

As I mentioned before, I’m looking for a job in Open Source Network Administration (given my studies) or Linux System Administration/Engineering, but I’d really like to stay with Open Source, and stay really close to my passion, Ubuntu. Furthermore, I’d also like to continue to work with HA clustering, load balancing, and related technologies, such as Virtualization and Cloud Computing (even though I might not have much experience, I’m a quick learner), either on implementation as a SysAdmin or as a Developer, as long as it keeps me close to Ubuntu.

Anyways, I just hope things become clearer in the next few days and take a turn for the better. Wish me luck :).

Posted in Planet, Ubuntu. 2 Comments »

UPDATED: Cluster Stack and PowerNap sessions at UDS-N

At UDS-N (Natty) I’ll be leading these two sessions:

  • Cluster Stack for Natty
    The Cluster Stack session will be divided into two main parts. In the first part, we will discuss the current status of the Cluster Stack in Ubuntu, what has and hasn’t been achieved so far, as well as the features we would like to see in the future. The second part of the session will concentrate on the integration of the Cluster Stack with the Ubuntu Enterprise Cloud (UEC).

    The outcomes of the discussion are:

    • Merge library split changes for cluster-glue and pacemaker from the Debian packages.
    • Complete MIR requests to finally get the packages into main.
    • Improve documentation, and add it to the Ubuntu Server Guide.
      • Docs: HA Apache2, HA MySQL, CLVM, recommend a cluster FS – OCFS2, fencing, etc.
    • Automated deployment (look into deploying with Puppet).
      • Simple: join a simple cluster / virtual IP.
      • Advanced: CLVM, DRBD, filesystems.
    • Meta-packages / tasksel to install and join a cluster.
    • HA for UEC.
      • Continue the research on HA for the CLC, Walrus, CC, and SC.
      • Eventually, write OCF RAs for the above components.
    • Investigate providing HA *inside* the Cloud.

  • PowerNap Improvements
    PowerNap is a power management tool, created by Dustin Kirkland, that has been integrated with the Ubuntu Enterprise Cloud. In this session we will discuss how to extend PowerNap’s functionality to make it available for other kinds of environments, as well as how to provide alternative power-saving methods for servers.

    The outcomes of the discussion are:

    • Investigate how PowerNap could tap into Upstart to monitor processes in an event-driven manner rather than polling /proc.
    • Use pm-powersave for PowerNap’s new power save mode.
    • Contribute any new actions to pm-utils (rather than keeping them in PowerNap).
    • Use event-based monitoring for input polling (limited to keyboard and mouse).
    • Have the network monitor match the MAC address in the WoL packet.
    • Provide a powerwaked daemon to track registered machines and be able to schedule poweroffs/updates.

If you would like to know more and you are not attending UDS in person, you can still participate remotely. Or you can just show up at the session. I hope to see everyone who’s interested there.

Florida LoCo Team UDS Host Party

If you haven’t heard yet, the Florida LoCo Team will be hosting a welcoming party for all UDS Attendees.

  • Who – UDS Attendees, Ubuntu Florida Team, and Guests
  • What – Pizza and a Movie night
  • When – October 25th from 7:00 – 10:00/10:30
  • Where – Grand Sierra D, the Plenary room.
  • Why – We would like to welcome everyone to Florida!

Please REGISTER your attendance HERE ASAP. Thank you!!

Posted in Planet, Ubuntu. 1 Comment »

UPDATE: High Availability for the Ubuntu Enterprise Cloud (UEC) – Cloud Controller (CLC)

So I finally had time to write the OCF Resource Agent for the Cloud Controller, as promised. It is an early Resource Agent and is currently tested ONLY for CLCs running on Ubuntu (UEC).

But first, what is an OCF Resource Agent? An OCF RA is an executable script that is used to manage a resource within a cluster. In this case, the RA is a script that manages the Cloud Controller resource in a 2-node Pacemaker-based HA cluster. The RA starts, stops, and monitors the service (the Cloud Controller) when the Cluster Resource Manager (Pacemaker) tells it to (this means that upstart will NOT start the CLC).
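To make that concrete, here is a heavily simplified sketch of what such an RA looks like. This is NOT the actual eucaclc script; the function names and the use of upstart’s start/stop/status commands are illustrative:

```shell
#!/bin/sh
# Minimal sketch of an OCF Resource Agent (illustrative, not the real eucaclc).
# Pacemaker invokes the script with one action argument and reads the exit
# code: 0 = OCF_SUCCESS, 7 = OCF_NOT_RUNNING.

clc_monitor() {
    # Report whether the managed service is currently running.
    if status eucalyptus-cloud 2>/dev/null | grep -q running; then
        return 0    # OCF_SUCCESS
    fi
    return 7        # OCF_NOT_RUNNING
}

clc_start() {
    # Start the service via upstart, then verify it actually came up.
    start eucalyptus-cloud >/dev/null 2>&1 || true
    clc_monitor
}

clc_stop() {
    stop eucalyptus-cloud >/dev/null 2>&1 || true
    return 0        # stop succeeds as long as the service is not running
}

# Dispatch on the action Pacemaker passes in ($1).
case "${1:-}" in
    start)   clc_start ;;
    stop)    clc_stop ;;
    monitor) clc_monitor ;;
    "")      : ;;   # no action given; do nothing
    *)       echo "usage: $0 {start|stop|monitor}" >&2; exit 3 ;;
esac
```

A real RA additionally implements meta-data (an XML description of the agent and its parameters) and validate-all actions, which Pacemaker queries before using the agent.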

Now that we all know what OCF RAs are, let’s test it. First, download the RA from HERE and put it in place:

wget -c http://people.ubuntu.com/~andreserl/eucaclc
sudo mkdir /usr/lib/ocf/resource.d/ubuntu
sudo mv eucaclc /usr/lib/ocf/resource.d/ubuntu/eucaclc
sudo chmod 755 /usr/lib/ocf/resource.d/ubuntu/eucaclc

Then, change the cluster configuration (sudo crm configure edit) so that the res_uec resource reads as follows:

primitive res_uec ocf:ubuntu:eucaclc op monitor interval="20s"

The new RA should then start the Cloud Controller automatically and keep monitoring it.

NOTE: Please note that this Resource Agent is an initial draft and might be buggy. If you find any bugs or things don’t work as expected, please don’t hesitate to contact me.

At UDS-M, I raised the concern about the lack of High Availability for the Ubuntu Enterprise Cloud (UEC). As part of the Cluster Stack blueprint, an effort to bring HA to UEC was defined; however, it was barely discussed due to lack of time, and the work on HA for the UEC was deferred to Natty. In preparation for the next release cycle, I’ve been able to set up a two-node HA cluster (Master/Slave) for the Cloud Controller (CLC).

NOTE: This tutorial is an early draft and might contain typos/errors that I haven’t noticed. It also might not work for you; that’s why I recommend first having a UEC up and running with one CLC, and then adding the second CLC. If you need help or guidance, you know where to find me :). Also note that this is for testing purposes only! I’ll be moving this HowTo to an Ubuntu Wiki page soon, since the formatting here is somewhat annoying :).

1. Installation Considerations
I’ll show you how to configure two UEC (Eucalyptus) Cloud Controllers in High Availability (Active/Passive), using the HA clustering tools (Pacemaker, Heartbeat) and DRBD for replication between the CLCs. This is shown in the following image.

The setup I used is a 4-node setup (1 CLC, 1 Walrus, 1 CC/SC, 1 NC), as detailed in the UEC Advanced Installation doc; however, I installed the packages from the Ubuntu Server Installer. As per the UEC Advanced Installation doc, it is assumed that there is only one network interface (eth0) in the Cloud Controller, connected to a “public network” that connects it both to the outside world and to the other components in the Cloud. However, to be able to provide HA, we need the following:

  • First, we need a Virtual IP (VIP) to allow both the clients and the other controllers to access either one of the CLCs through that single IP. In this case, we assume the VIP is an address on the same “public network” subnet as the controllers. This VIP will also be used to generate the new certificates.
  • Second, we need to add a second network interface to the CLCs to use as the DRBD replication link. This second interface is eth1 and will have an address on a separate, dedicated subnet.

2. Install Second Cloud Controller (CLC2)
Once you finish setting up the UEC and everything is working as expected, install a second Cloud Controller.
Once it is installed, it is desirable not to start the services just yet. However, you will need to exchange the CLC ssh keys with both the CC and the Walrus, as specified in SSH Key Authentication Setup under STEP 4 of the UEC Advanced Installation doc. Please note that this second CLC will also have two interfaces, eth0 and eth1. Leave eth1 unconfigured, but configure eth0 with an IP address on the same network as the other controllers.

3. Configure Second Network Interface
Once the two CLCs are installed (CLC1 and CLC2), we need to configure eth1. This interface will be used as a direct link between CLC1 and CLC2, serving as the DRBD replication link. Configure it in /etc/network/interfaces on each node.

On CLC1:

auto eth1
iface eth1 inet static
    address <CLC1-eth1-address>
    netmask <replication-subnet-netmask>

On CLC2:

auto eth1
iface eth1 inet static
    address <CLC2-eth1-address>
    netmask <replication-subnet-netmask>

NOTE: Do *NOT* add a gateway, because this is a direct link between the CLCs. If we add one, it will create a default route, and the configuration of the resources will fail further along the way.

4. Setting up DRBD

Once the CLC2 is installed and configured, we need to setup DRBD for replication between CLC’s.

4.1. Create Partitions (CLC1/CLC2)
For this, we need a new disk or disk partition. In my case, I’ll be using /dev/vdb1. Please note that the partitions need to be exactly the same size on both nodes. You can create them whichever way you prefer.

4.2. Install DRBD and load module (CLC1/CLC2)
Now we need to install the DRBD utilities:

sudo apt-get install drbd8-utils

Once it is installed, we need to load the kernel module and add it to /etc/modules. Please note that the DRBD kernel module is now included in the mainline kernel.

sudo modprobe drbd
sudo -i
echo drbd >> /etc/modules

4.3. Configuring the DRBD resource (CLC1/CLC2)
Add a new resource for DRBD by editing the following file:

sudo vim /etc/drbd.d/uec-clc.res

The configuration looks similar to the following:

resource uec-clc {
  device /dev/drbd0;
  disk /dev/vdb1;
  meta-disk internal;
  on clc1 {
    address <CLC1-eth1-address>:7788;
  }
  on clc2 {
    address <CLC2-eth1-address>:7788;
  }
  syncer {
    rate 10M;
  }
}

4.4. Creating the resource (CLC1/CLC2)
Now we need to do the following on CLC1 and CLC2:

sudo drbdadm create-md uec-clc
sudo drbdadm up uec-clc

4.5. Establishing initial communication (CLC1)
Now, we need to do the following:

sudo drbdadm -- --clear-bitmap new-current-uuid uec-clc
sudo drbdadm primary uec-clc
sudo mkfs -t ext4 /dev/drbd0

4.6. Copying the Cloud Controller Data for DRBD Replication (CLC1)
Once the DRBD nodes are in sync, we need the data replicated between CLC1 and CLC2, and we must make the necessary changes so that either one can access the data at a given point in time. To do this, run the following on CLC1:

sudo mkdir /mnt/uecdata
sudo mount -t ext4 /dev/drbd0 /mnt/uecdata
sudo mv /var/lib/eucalyptus/ /mnt/uecdata
sudo mv /var/lib/image-store-proxy/ /mnt/uecdata
sudo ln -s /mnt/uecdata/eucalyptus/ /var/lib/eucalyptus
sudo ln -s /mnt/uecdata/image-store-proxy/ /var/lib/image-store-proxy
sudo umount /mnt/uecdata

What we did here is move the Cloud Controller data onto the DRBD mount point so that it gets replicated to the second CLC, and then symlink the original data paths to the mountpoint.
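If you want to convince yourself the move-and-symlink trick is safe before touching /var/lib, you can rehearse it on scratch directories (the paths below are throwaway stand-ins, not the real ones):

```shell
# Rehearse the move-and-symlink pattern from step 4.6 in a temp directory.
work=$(mktemp -d)
mkdir -p "$work/var/lib/eucalyptus" "$work/mnt/uecdata"
echo "cloud data" > "$work/var/lib/eucalyptus/state"

# Move the data onto the (stand-in) DRBD mountpoint...
mv "$work/var/lib/eucalyptus" "$work/mnt/uecdata/"
# ...and point the old path at the new location.
ln -s "$work/mnt/uecdata/eucalyptus" "$work/var/lib/eucalyptus"

# The service still finds its data at the original path.
cat "$work/var/lib/eucalyptus/state"
```

Because the symlink keeps the original path valid, eucalyptus needs no configuration change to find its data on the replicated device.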

4.7. Preparing the second Cloud Controller (CLC2)
Once we have prepared the data on CLC1, we can discard the data on CLC2 and create the symlinks the same way we did on CLC1:

sudo mkdir /mnt/uecdata
sudo rm -fr /var/lib/eucalyptus
sudo rm -fr /var/lib/image-store-proxy
sudo ln -s /mnt/uecdata/eucalyptus/ /var/lib/eucalyptus
sudo ln -s /mnt/uecdata/image-store-proxy/ /var/lib/image-store-proxy

After this, the data will be replicated via DRBD; whenever CLC1 fails, CLC2 will have an up-to-date copy of the data.

5. Setup the Cluster

5.1. Install the Cluster Tools
First we need to install the clustering tools:

sudo apt-get install heartbeat pacemaker

5.2. Configure Heartbeat
Then we need to configure Heartbeat. First, create /etc/ha.d/ha.cf and add the following:

autojoin none
mcast eth0 649 1 0
warntime 5
deadtime 15
initdead 60
keepalive 2
node clc1
node clc2
crm respawn

Then create the authentication file (/etc/ha.d/authkeys) and add the following:

1 md5 password

and change the permissions:

sudo chmod 600 /etc/ha.d/authkeys
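Rather than a literal password string, you may want a random shared secret. The following is just a sketch (heartbeat also accepts sha1 in place of md5 in authkeys):

```shell
# Generate a random 40-hex-character secret for /etc/ha.d/authkeys.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | cut -d' ' -f1)
echo "1 sha1 $key"
# Put the printed line into /etc/ha.d/authkeys on BOTH nodes (same key),
# then chmod 600 the file as shown above.
```

Both nodes must share the identical authkeys file, or heartbeat will refuse to form the cluster.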

5.3. Removing Startup of Services at Boot
We need to let the cluster manage the resources instead of starting them at boot:

sudo update-rc.d -f eucalyptus remove
sudo update-rc.d -f eucalyptus-cloud remove
sudo update-rc.d -f eucalyptus-network remove
sudo update-rc.d -f image-store-proxy remove

And we also need to change “start on” to “stop on” in the upstart configuration files at /etc/init/* for the relevant eucalyptus jobs.
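The “start on” to “stop on” edit can be done with sed; here it is demonstrated on an inline sample stanza (the real files live in /etc/init/ and need sudo, and the exact job names depend on your install):

```shell
# Flip an upstart "start on" stanza to "stop on" so the job never
# auto-starts and the cluster stays in control. On a real node:
#   sudo sed -i 's/^start on/stop on/' /etc/init/<job>.conf
sample='start on runlevel [2345]'
result=$(printf '%s\n' "$sample" | sed 's/^start on/stop on/')
echo "$result"   # prints: stop on runlevel [2345]
```

With the stanza inverted, upstart still knows how to start and stop the job on demand, which is exactly what the upstart:eucalyptus cluster resource relies on.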


5.4. Configuring the resources
Then, we need to configure the cluster resources. For this do the following:

sudo crm configure

and paste the following:

primitive res_fs_clc ocf:heartbeat:Filesystem params device=/dev/drbd/by-res/uec-clc directory=/mnt/uecdata fstype=ext4 options=noatime
primitive res_ip_clc ocf:heartbeat:IPaddr2 params ip=<VIP> cidr_netmask=24 nic=eth0
primitive res_ip_clc_src ocf:heartbeat:IPsrcaddr params ipaddress="<VIP>"
primitive res_uec upstart:eucalyptus op start timeout=120s op stop timeout=120s op monitor interval=30s
primitive res_uec_image_store_proxy lsb:image-store-proxy
group rg_uec res_fs_clc res_ip_clc res_ip_clc_src res_uec res_uec_image_store_proxy
primitive res_drbd_uec-clc ocf:linbit:drbd params drbd_resource=uec-clc
ms ms_drbd_uec res_drbd_uec-clc meta notify=true
order o_drbd_before_uec inf: ms_drbd_uec:promote rg_uec:start
colocation c_uec_on_drbd inf: rg_uec ms_drbd_uec:Master
property stonith-enabled=false
property no-quorum-policy=ignore

6. Specify the Cloud IP in the CC, NC, and CLC
Once you finish the configuration above, one of the CLCs will be the active one and the other will be the passive one. The Cluster Resource Manager will decide which one becomes primary; however, it is expected that CLC1 will become the primary.

Now, as specified in the UEC Advanced Installation Doc, we need to specify the Cloud Controller VIP in the CC, and it is also important to do so in the NC. This is done in /etc/eucalyptus/eucalyptus.conf by setting the cloud IP option to the VIP.


Then, log into the web front end and change the Cloud Configuration to have the VIP as the Cloud Host.

By doing this, the new certificates will be generated with the VIP, which will allow you to connect to the cloud even if the primary Cloud Controller fails and the second one takes control of the service.

Finally, restart the Walrus, CC/SC, and NC and enjoy.

7. Final Thoughts
The cluster resource manager is using the upstart job to manage the Cloud Controller. However, this is not optimal and is only used for testing purposes. An OCF Resource Agent will be required to adequately start/stop and monitor Eucalyptus. The OCF RA will be developed soon, and this will be discussed at the Ubuntu Developer Summit – Natty.

My first UEC deployment!!

The other day, I finally decided to go ahead and deploy my own UEC. Given that I don’t have enough hardware for what I wanted to do, I did it on virtual machines using KVM, with a virtual network (NAT).

For my deployment, I used 4 VM’s as follows:

  • 1 Cloud Controller
  • 1 Walrus
  • 1 Cluster Controller / Storage Controller
  • 1 Node Controller

The installation process was VERY SIMPLE!! I used this tutorial (UECPackageInstallSeparate), but instead of installing the packages manually on VMs that were already running, I installed them from the Ubuntu Server Installer. The installation was very straightforward. Some of the steps were already done automatically by the UEC installer; others I had to do manually, such as copying ssh keys and such.

I must say, I’m impressed at how easy it was to install a UEC. Great integration work from the Ubuntu Server Team!!!

Anyways, after the installation, the Walrus and CC/SC were automatically registered with the CLC. However, I did face one issue: the NC was not able to register!! After some research, I realized that the CC/SC didn’t have the *.pem keys that were available on the CLC. So I copied the keys from the CLC into the CC/SC and manually registered the NC on the CC/SC with euca_conf --register-nodes <node-ip>. After that, the NC was registered and working. I don’t really know why this happened, but I suspect it was because of the virtual network, or maybe I missed something in the tutorial… if anyone has any ideas, let me know :).

Now, the next step will be to start researching how to provide HA for the UEC, as soon as I get a break from school (my last semester of my Masters in Telecom & Networking). So, wish me luck.

Posted in Planet, Ubuntu. 7 Comments »

Masters almost done! Job searching again!!

After a year of being a full-time student in the MS in Telecommunications & Networking at Florida International University, and having successfully finished my Google Summer of Code project for Ubuntu (the TestDrive front-end), it’s time for me to start job searching again!

Currently, I find myself in my last semester of classes, and I’ll be graduating this Fall 2010 (December 2010). This means that a new stage in my life begins, and that I need to start looking for a job so I can start working as soon as I finish my Masters degree!

I’m looking for a job in Open Source Network Administration (given my studies) or Linux System Administration/Engineering, but I’d really like to stay with Open Source, and stay really close to my passion, Ubuntu. Furthermore, I’d also like to continue to work with HA clustering, load balancing, and related technologies, such as Virtualization and Cloud Computing (even though I might not have much experience, I’m a quick learner), either on implementation as a SysAdmin or as a Developer, as long as it keeps me close to Ubuntu. That’s what makes me happy :).

Anyways, if anyone knows of something, has a job offer, or wants to know more about me, my CV can be found HERE; don’t hesitate to contact me. :)

Posted in Planet, Ubuntu. 1 Comment »