Getting Started with MAAS and Juju: MAAS Overview

For a while, I have been wanting to write about MAAS and how it can easily deploy workloads (especially OpenStack) with Juju, and the time has finally come. This will be the first of a series of posts where I’ll provide an overview of how to quickly get started with MAAS and Juju.

What is MAAS?

I think MAAS needs no introduction, but if you really want one, this awesome video provides a far better explanation than I can give in this blog post.

http://youtu.be/J1XH0SQARgo

 

Components and Architecture

MAAS has been designed so that it can be deployed in different architectures and network environments. It can be deployed in either a single-node or a multi-node architecture, which makes MAAS a deployment system that scales to meet your needs. It has two basic components: the MAAS Region Controller and the MAAS Cluster Controller.

MAAS Architectures

Region Controller

The MAAS Region Controller is the component users interface with, and the one that controls the Cluster Controllers. It hosts the WebUI and the API, as well as the MAAS meta-data server for cloud-init and the DNS server. The Region Controller also configures an rsyslogd server to log the installation process, and a proxy (squid-deb-proxy) used to cache Debian packages. The preseeds used for the different stages of the process are stored here as well.

Cluster Controller

The MAAS Cluster Controller only interfaces with the Region Controller and is in charge of provisioning in general. It hosts the TFTP and DHCP server(s), and stores both the PXE files and the ephemeral images. It is also the Cluster Controller’s job to power the managed nodes on and off (if so configured).

The Architecture

As you can see in the image above, MAAS can be deployed on a single node or on multiple nodes. The way MAAS has been designed makes it highly scalable, allowing you to add more Cluster Controllers, each managing a different pool of machines. A single-node scenario can become a multi-node scenario by simply adding more Cluster Controllers. Each Cluster Controller has to register with the Region Controller, and each can be configured to manage a different network. The intention is for each Cluster Controller to manage a different pool of machines in different networks (for provisioning), allowing MAAS to manage hundreds of machines. This is completely transparent to users, because MAAS presents the machines as a single pool, all of which can be used for deploying and orchestrating your services with Juju.

How Does It Work?

MAAS has three basic stages: Enlistment, Commissioning and Deployment, which are explained below:

MAAS Process

Enlistment

The enlistment process is the process by which a new machine is registered to MAAS. When a new machine is started, it will obtain an IP address and PXE boot from the MAAS Cluster Controller. The PXE boot process will instruct the machine to load an ephemeral image that will run and perform an initial discovery process (via a preseed fed to cloud-init). This discovery process will obtain basic information such as network interfaces, MAC addresses and the machine’s architecture. Once this information is gathered, a request to register the machine is made to the MAAS Region Controller. Once this happens, the machine will appear in MAAS in the Declared state.

Commissioning

The commissioning process is the process by which MAAS collects hardware information, such as the number of CPU cores, RAM, disk size, etc., which can later be used as constraints. Once the machine has been enlisted (Declared state), it must be accepted into MAAS for the commissioning process to begin and for it to become ready for deployment. In the WebUI, for example, an “Accept & Commission” button will be present. Once the machine is accepted into MAAS, it will PXE boot from the MAAS Cluster Controller again and will be instructed to run the same ephemeral image. This time, however, the commissioning process will gather more information about the machine, which will be sent back to the MAAS Region Controller (via cloud-init, from the MAAS meta-data server). Once this process has finished, the machine’s information will be updated and it will change to the Ready state, which means the machine is ready for deployment.
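
As a side note, accepting machines doesn’t have to happen in the WebUI. A rough sketch of the same thing with the maas-cli of this era follows; the profile name, server URL and API key are placeholders, and the exact sub-commands may differ between MAAS versions:

maas-cli login mymaas http://<region-controller>/MAAS/api/1.0 <api-key>   # API key from your MAAS preferences page
maas-cli mymaas nodes list          # shows the enlisted (Declared) nodes
maas-cli mymaas nodes accept-all    # accept them all, kicking off commissioning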

Deployment

Once machines are in the Ready state, they can be used for deployment. Deployment can happen with juju or with the maas-cli (or even the WebUI). The maas-cli will only let you install Ubuntu on a machine, while juju will not only deploy Ubuntu on it, but also let you orchestrate services on top. When a machine has been deployed, its state will change to Allocated to <user>. This state means that the machine is in use by the user who requested its deployment.
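
As a quick taste of what deployment looks like with juju (this assumes juju is already configured to use your MAAS environment; the charms are just examples):

juju bootstrap            # allocates a Ready machine as the environment's bootstrap node
juju deploy wordpress     # each service unit gets its own MAAS machine
juju deploy mysql
juju add-relation wordpress mysql
juju expose wordpress
juju status               # shows which machines were allocated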

Releasing Machines

Once a user doesn’t need the machine anymore, it can be released, and its status will change from Allocated to <user> back to Ready. This means that the machine will be turned off and made available for later use.

But… How do Machines Turn On/Off?

Now, you might be wondering how the machines are turned on and off, and who is in charge of that. MAAS can manage power devices, such as IPMI/iLO, Sentry Switch CDUs, or even virsh. By default, we expect that all the machines controlled by MAAS have IPMI/iLO cards, so if yours do, MAAS will attempt to auto-detect and auto-configure them during the Enlistment and Commissioning processes. Once the machines are accepted into MAAS (after enlistment), they will be turned on automatically and commissioned (provided IPMI was discovered and configured correctly). This also means that every time a machine is deployed, it will be turned on automatically.

Note that MAAS not only handles physical machines; it can also handle virtual machines, hence the virsh power management type. However, you will have to configure the details manually for MAAS to manage these virtual machines and turn them on and off automatically.
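
To give an idea of what that means, for the virsh power type MAAS essentially ends up running commands like the following against your VM host (the connection URI and domain name here are just examples):

virsh -c qemu+ssh://ubuntu@vmhost/system start node1     # power the VM on
virsh -c qemu+ssh://ubuntu@vmhost/system destroy node1   # hard power-off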

PowerNap Improvements session at UDS-O

After the success of the PowerNap improvements in Ubuntu Natty 11.04, we will be having another session at UDS-O, Thursday the 12th at 15:00. In this session we will discuss the following:

  • Second Stage action when running in PowerSave mode.
  • Support for port ranges in the Network Monitors.
  • Changing the polling monitoring system to an event-based system.
  • Client/Server approach to monitor/manage PowerNap “client machines” over the network for data-center-wide deployments.
  • Server ARP network monitoring for automatic wake-up of clients.
  • API-like approach for integration with other projects.

Everyone who’s interested is more than welcome to join! For more information, the blueprint can be found HERE.

HA Cluster Stack Session at UDS-O

Thursday the 12th at noon we will be having the HA Cluster Stack session. In the session we will discuss the following:

  • Discuss the adoption of new upstream releases of the HA Cluster Stack to include in Oneiric in preparation for the next Ubuntu LTS release.
  • Finish up work items from previous sessions (mainly documentation).
  • Gather feature requests and discuss the creation of meta-packages.
  • And, if time allows, I’d like to follow up on HA for OpenStack, as they had a session about it at their Design Summit.

If you are interested in the future of HA Clustering in Ubuntu, you are more than welcome to join this session. For more information, the blueprint can be found HERE.

PowerNap Improvements for Natty

For all of those who don’t know: “PowerNap is a screen saver for servers, except it doesn’t save your screen; it saves the environment and lowers your energy bill.” – Dustin Kirkland :). PowerNap was originally created by Dustin to be integrated with the Ubuntu Enterprise Cloud (UEC), but it has since been extended for home use. Originally, it put machines to sleep (suspend, hibernate, poweroff) when a given list of processes was not found in the process table for a determined period of time. During the Natty cycle, however, improvements were made, so PowerNap now puts to sleep (suspend, poweroff, powersave) machines that are tagged as underutilized by a set of Monitors.

Improvements Overview

  • PowerNap has a set of Monitors to detect activity within the server and determine whether it is idle. If it is, PowerNap will execute an ACTION. Administrators can choose which monitors to enable/disable. These are:
    • ProcessMonitor: Looks for a process in the process table.
    • IOMonitor: Monitors IO activity by process name.
    • InputMonitor: Monitors mouse/keyboard input activity from USB-connected devices.
    • LoadMonitor: Monitors a server load threshold.
    • TCPMonitor: Monitors active TCP connections (i.e. SSH).
    • UDPMonitor: Monitors activity received in any user defined UDP port.
    • WoLMonitor: Monitors WoL packets on ports 7 and/or 9.
    • ConsoleMonitor: Monitors console activity.
  • The process starts when PowerNap begins monitoring for an ABSENT_PERIOD (e.g. 300 seconds). If no activity has been detected within that period, PowerNap executes an ACTION (see the config sketch after this list).
    • Before the ACTION is taken, PowerNap enters the GRACE_PERIOD (e.g. 30 seconds), notifying the user that the ACTION will be taken in GRACE_PERIOD seconds. (E.g., at second 270 PowerNap notifies its users, and the window between seconds 270 and 300 is the GRACE_PERIOD.)
  • The possible ACTIONS are:
    • Best-effort – Automatically decides between a user-defined action or any of the other methods listed below (these methods rely on pm-utils)
    • Suspend (Command: pm-suspend)
    • Hibernate (Command: pm-hibernate)
    • Poweroff (Command: poweroff)
    • Powersave – Newly added method that reduces power consumption (Command: pm-powersave)
  • The PowerSave method executes a set of scripts provided by both pm-utils and PowerNap. These scripts aim to reduce the power consumption of the machine by turning off hardware capabilities or tuning the OS. It is possible to provide custom scripts, as well as to choose which ones to enable or disable. Examples of these scripts are:
    • Turn off all the CPU cores except one.
    • Reduce the cores’ frequency to the lowest possible.
    • Disable WoL from Network Cards.
    • Change the NIC speed from 1Gbps to 100Mbps.
    • Turn off USB ports.
    • Disable HAL polling.
  • Now, when the PowerSave ACTION is taken, the machine keeps running in a lower power state, and PowerNap keeps monitoring. Once any of the Monitors detects activity, the PowerSave action is reverted.
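
To tie these settings together, here is a hypothetical sketch of what an /etc/powernap/config could look like. The key names below are my own illustration based on the options described above, not necessarily the exact ones PowerNap ships, so check the packaged config for the real syntax:

# Illustrative sketch only -- key names are assumptions, see the shipped config
[powernap]
ABSENT_PERIOD = 300        # seconds without activity before the ACTION fires
GRACE_PERIOD = 30          # warning window at the end of the ABSENT_PERIOD
ACTION_METHOD = powersave  # best-effort | suspend | hibernate | poweroff | powersave

[monitors]
ProcessMonitor = kvm       # stay awake while kvm processes exist
TCPMonitor = 22            # stay awake while SSH connections are active
WoLMonitor = 7,9           # wake up on WoL packets on ports 7 and 9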

 

PowerWake

  • PowerWake is simply a tool that sends WoL packets to a specified IP/broadcast address in order to wake up a server.

 

Additional Tools

  • powernap-now: Sends a signal to the PowerNap daemon to execute the ACTION immediately, regardless of the state of the monitors.
  • powerwake-now: Sends a signal to the PowerNap daemon to wake up from PowerSave mode.
  • Note that these commands have to be executed on the machine running PowerNap. If this needs to happen over the network, the command has to be sent remotely (e.g. via SSH) and executed on that machine.


The Future

  • Second Stage Action: a second action taken after entering PowerSave mode (e.g. suspend after 2 hours of running in PowerSave mode).
  • Client/Server Model: the main idea is to create a powerwaked server that tracks all the machines running PowerNap on the network and is able to schedule wakeups, updates, etc.

TestDrive: Testing an Ubuntu ISO in real Hardware??

So, last month I was reading the “Unity Desktop and maverick backport” thread on the ubuntu-devel list. At some point the discussion turned to how to test Natty (Unity/Compiz specifically) on real hardware from the early stages of the development cycle. Dustin recommended using TestDrive for the testing; however, he also mentioned that 3D acceleration was not available in the VMs, and that his recommendation was more suited to 2D testing.

That discussion reminded me of an outdated proposed branch for TestDrive, in which an option was added to launch an ISO from GRUB, by placing the ISO in a special folder and creating an entry in GRUB’s boot menu. So, today I decided to test that feature! It works, but the code needs improvements. So, before actually working on them, I was wondering what y’all think.

So my question is: would it be a good idea to add that option to TestDrive, making an ISO available for booting directly from GRUB for testing on real hardware, or not? Pros/cons, comments, suggestions?
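
For the curious, the kind of GRUB entry this boils down to is a loopback-boot stanza roughly like the one below. The folder and ISO name are examples, and the initrd is initrd.lz or initrd.gz depending on the release:

menuentry "TestDrive ISO" {
    # Loop-mount the ISO and boot its casper kernel, telling it where the ISO lives
    set isofile="/testdrive/natty-desktop-i386.iso"
    loopback loop $isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash
    initrd (loop)/casper/initrd.lz
}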

UPDATED: Cluster Stack and PowerNap sessions at UDS-N

At UDS-N (Natty) I’ll be leading these two sessions:

  • Cluster Stack for Natty
    The Cluster Stack session will be divided into two main parts. In the first part we will discuss the current status of the Cluster Stack in Ubuntu, what has and hasn’t been achieved so far, as well as the features we would like to see in the future. The second part of the session will concentrate on the integration of the Cluster Stack with the Ubuntu Enterprise Cloud (UEC).

    The outcome of the discussion is:

    • Merge the library split changes for cluster-glue and pacemaker from the Debian packages.
    • Complete MIR requests to finally get the packages into main.
    • Improve documentation, and add it to the Ubuntu Server Guide.
      • Docs: HA Apache2, HA MySQL, CLVM, recommend a cluster FS – OCFS2, fencing, etc.
    • Automated deployment (look into deploying with puppet).
      • Simple: Join a simple cluster/Virtual IP.
      • Advanced: CLVM, DRBD, Filesystems.
    • Meta-packages / Tasksel to install and join a Cluster.
    • HA for UEC.
      • Continue with the research on HA for CLC, Walrus, CC, SC
      • Eventually, write OCF RAs for the above components.
    • Investigate providing HA *inside* the Cloud.

  • PowerNap Improvements
    PowerNap is a power management tool, created by Dustin Kirkland, that has been integrated with the Ubuntu Enterprise Cloud. In this session, however, we will discuss how to extend the functionality of PowerNap to make it available for other kinds of environments, as well as how to provide alternative methods of power savings for servers.

    The outcome of the discussion is:

    • Investigate how PowerNap could tap into Upstart to monitor processes in an event-driven manner rather than polling /proc.
    • Use pm-powersave for PowerNap’s new powersave mode.
    • Contribute any new actions to pm-utils (rather than keeping them in PowerNap).
    • Use event-based monitoring for input polling (limited to keyboard and mouse).
    • Get the network monitor to match the MAC address in the WoL packet.
    • Provide a powerwaked daemon to track registered machines and be able to schedule poweroffs/updates.

If you would like to know more and you are not attending UDS in person, you can still participate remotely. Or, you can just show up at the session. I hope to see everyone who’s interested there.

UPDATE: High Availability for the Ubuntu Enterprise Cloud (UEC) – Cloud Controller (CLC)

UPDATE
So I finally had the time to write the OCF Resource Agent for the Cloud Controller, as promised. It is an early Resource Agent and is currently tested ONLY for CLCs running on Ubuntu (UEC).

But first, what is an OCF Resource Agent? An OCF RA is an executable script used to manage a resource within a cluster. In this case, the RA is a script that manages the resource (the Cloud Controller) in a 2-node Pacemaker-based HA cluster. The RA starts, stops, and monitors the service (the Cloud Controller) when the Cluster Resource Manager (Pacemaker) tells it to (this means that upstart will NOT start the CLC).
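
To make that more concrete, here is a minimal sketch of the shape an OCF RA takes. Note this is NOT the eucaclc script itself, just an illustration of the start/stop/monitor/meta-data contract; the process check and the upstart job name are assumptions for the example:

#!/bin/sh
# Minimal OCF RA sketch. Pacemaker invokes it as: <ra> start|stop|monitor|meta-data
# and reads the standard OCF exit codes back.
OCF_SUCCESS=0; OCF_ERR_UNIMPLEMENTED=3; OCF_NOT_RUNNING=7

clc_running() {
    # Assumption for illustration: a running CLC shows up as eucalyptus-cloud
    pgrep -f eucalyptus-cloud >/dev/null 2>&1
}

case "$1" in
    start)   clc_running || start eucalyptus-cloud; exit $OCF_SUCCESS ;;
    stop)    clc_running && stop eucalyptus-cloud; exit $OCF_SUCCESS ;;
    monitor) if clc_running; then exit $OCF_SUCCESS; else exit $OCF_NOT_RUNNING; fi ;;
    meta-data)
        # A real RA prints a full XML description of itself and its actions here
        echo '<resource-agent name="eucaclc-sketch"/>'; exit $OCF_SUCCESS ;;
    *)       exit $OCF_ERR_UNIMPLEMENTED ;;
esac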

Now that we all know what OCF RAs are, let’s test it. First, download the RA and move it into place:

wget -c http://people.ubuntu.com/~andreserl/eucaclc
sudo mkdir /usr/lib/ocf/resource.d/ubuntu
sudo mv eucaclc /usr/lib/ocf/resource.d/ubuntu/eucaclc
sudo chmod 755 /usr/lib/ocf/resource.d/ubuntu/eucaclc

Then, change the cluster configuration (sudo crm configure edit) for the res_uec resource as follows:

primitive res_uec ocf:ubuntu:eucaclc op monitor interval="20s"

And the new RA should start the Cloud Controller automatically and keep monitoring it.
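
To confirm the cluster picked it up, a one-shot status check should show res_uec started on the active node:

sudo crm_mon -1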

NOTE: Please note that this Resource Agent is an initial draft and might be buggy. If you find any bugs or things don’t work as expected, please don’t hesitate to contact me.

At UDS-M, I raised the concern about the lack of High Availability in the Ubuntu Enterprise Cloud (UEC). As part of the Cluster Stack blueprint, the effort to bring HA to UEC was defined; however, it was barely discussed due to lack of time, and the work on HA for UEC has been deferred to Natty. In preparation for the next release cycle, though, I’ve been able to set up a two-node HA cluster (Master/Slave) for the Cloud Controller (CLC).

NOTE: This tutorial is an early draft and might contain typos/errors I have not noticed. It also might not work for you, which is why I recommend first getting a UEC up and running with one CLC, and then adding the second CLC. If you need help or guidance, you know where to find me :). Also note that this is for testing purposes only! I’ll be moving this HowTo to an Ubuntu Wiki page soon, since the formatting here seems to be somewhat annoying :).

1. Installation Considerations
I’ll show you how to configure two UEC (Eucalyptus) Cloud Controllers in High Availability (Active/Passive), using the HA clustering tools (Pacemaker, Heartbeat) and DRBD for replication between the CLCs. This is shown in the following image.

The setup I used is a 4-node setup (1 CLC, 1 Walrus, 1 CC/SC, 1 NC), as detailed in the UEC Advanced Installation doc; however, I installed the packages from the Ubuntu Server installer. Now, as per the UEC Advanced Installation doc, it is assumed that there is only one network interface (eth0) in the Cloud Controller, connected to a “public network” that connects it to both the outside world and the other components in the cloud. However, to be able to provide HA we need the following:

  • First, we need a Virtual IP (VIP) so that both the clients and the other controllers can reach either of the CLCs through a single IP. In this case, we assume that the “public network” is 192.168.0.0/24 and that the VIP is 192.168.0.100. This VIP will also be used to generate the new certificates.
  • Second, we need to add a second network interface to the CLCs to use as the DRBD replication link. This second interface is eth1 and will have an address in the 10.10.10.0/30 range.

2. Install Second Cloud Controller (CLC2)
Once you finish setting up the UEC and everything is working as expected, install a second Cloud Controller. Once installed, it is desirable not to start the services just yet. However, you will need to exchange the CLC SSH keys with both the CC and the Walrus, as specified in “SSH Key Authentication Setup” under STEP 4 of the UEC Advanced Installation doc. Please note that this second CLC will also have two interfaces, eth0 and eth1. Leave eth1 unconfigured, but configure eth0 with an IP address in the same network as the other controllers.

3. Configure Second Network Interface
Once the two CLCs are installed (CLC1 and CLC2), we need to configure eth1. This interface will be used as a direct link between CLC1 and CLC2 and will serve as DRBD’s replication link. In this example, we’ll be using 10.10.10.0/30. Add the following to /etc/network/interfaces.

On CLC1:

auto eth1
iface eth1 inet static
address 10.10.10.1
netmask 255.255.255.252

On CLC2:

auto eth1
iface eth1 inet static
address 10.10.10.2
netmask 255.255.255.252

NOTE: Do *NOT* add a gateway, because this is a direct link between the CLCs. If you add one, it will create a default route and the configuration of the resources will fail further along the way.

4. Setting up DRBD

Once the CLC2 is installed and configured, we need to setup DRBD for replication between CLC’s.

4.1. Create Partitions (CLC1/CLC2)
For this, we need either a new disk or a disk partition; in my case, I’ll be using /dev/vdb1. Please note that the partitions need to be exactly the same size on both nodes. You can create them whichever way you prefer.

4.2. Install DRBD and load module (CLC1/CLC2)
Now we need to install the DRBD userspace utilities:

sudo apt-get install drbd8-utils

Once it is installed, we need to load the kernel module and add it to /etc/modules. Please note that the DRBD kernel module is now included in the mainline kernel.

sudo modprobe drbd
sudo -i
echo drbd >> /etc/modules

4.3. Configuring the DRBD resource (CLC1/CLC2)
Add a new resource for DRBD by creating the following file:

sudo vim /etc/drbd.d/uec-clc.res

The configuration looks similar to the following:

resource uec-clc {
    device /dev/drbd0;
    disk /dev/vdb1;
    meta-disk internal;
    on clc1 {
        address 10.10.10.1:7788;
    }
    on clc2 {
        address 10.10.10.2:7788;
    }
    syncer {
        rate 10M;
    }
}

4.4. Creating the resource (CLC1/CLC2)
Now we need to do the following on CLC1 and CLC2:

sudo drbdadm create-md uec-clc
sudo drbdadm up uec-clc

4.5. Establishing initial communication (CLC1)
Now, on CLC1, we skip the initial full sync (both devices are empty anyway, which is what the clear-bitmap trick is for), promote the node to primary, and create the filesystem:

sudo drbdadm -- --clear-bitmap new-current-uuid uec-clc
sudo drbdadm primary uec-clc
sudo mkfs -t ext4 /dev/drbd0
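
Before moving on, it’s worth checking that both sides agree on the state. On either node:

cat /proc/drbd                # the connection state should be Connected
sudo drbdadm dstate uec-clc   # should report UpToDate/UpToDate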

4.6. Copying the Cloud Controller Data for DRBD Replication (CLC1)
Once the DRBD nodes are in sync, we need to get the data replicated between CLC1 and CLC2 and make the necessary changes so that either one can access the data at any given point in time. To do this, run the following on CLC1:

sudo mkdir /mnt/uecdata
sudo mount -t ext4 /dev/drbd0 /mnt/uecdata
sudo mv /var/lib/eucalyptus/ /mnt/uecdata
sudo mv /var/lib/image-store-proxy/ /mnt/uecdata
sudo ln -s /mnt/uecdata/eucalyptus/ /var/lib/eucalyptus
sudo ln -s /mnt/uecdata/image-store-proxy/ /var/lib/image-store-proxy
sudo umount /mnt/uecdata

What we did here is move the Cloud Controller data onto the DRBD-backed mount point so that it gets replicated to the second CLC, and then replace the original data folders with symlinks into the mount point.

4.7. Preparing the second Cloud Controller (CLC2)
Once we have prepared the data on CLC1, we can discard the data on CLC2 and create the same symlinks we created on CLC1. We do this as follows:

sudo mkdir /mnt/uecdata
sudo rm -fr /var/lib/eucalyptus
sudo rm -fr /var/lib/image-store-proxy
sudo ln -s /mnt/uecdata/eucalyptus/ /var/lib/eucalyptus
sudo ln -s /mnt/uecdata/image-store-proxy/ /var/lib/image-store-proxy

After this, the data will be replicated via DRBD, so whenever CLC1 fails, CLC2 will have an up-to-date copy of the Cloud Controller data to take over with.

5. Setup the Cluster

5.1. Install the Cluster Tools
First we need to install the clustering tools:

sudo apt-get install heartbeat pacemaker

5.2. Configure Heartbeat
Then we need to configure Heartbeat. First, create /etc/ha.d/ha.cf and add the following:

autojoin none
mcast eth0 239.0.0.43 694 1 0
warntime 5
deadtime 15
initdead 60
keepalive 2
node clc1
node clc2
crm respawn

Then create the authentication file (/etc/ha.d/authkeys) and add the following (use a secret of your own instead of “password”):

auth 1
1 md5 password

and change the permissions:

sudo chmod 600 /etc/ha.d/authkeys

5.3. Removing Startup of services at boot up
We need to let the Cluster manage the resources, instead of starting them at bootup.

sudo update-rc.d -f eucalyptus remove
sudo update-rc.d -f eucalyptus-cloud remove
sudo update-rc.d -f eucalyptus-network remove
sudo update-rc.d -f image-store-proxy remove

And we also need to change “start on” to “stop on” in the upstart configuration files at /etc/init/* for the following jobs (a one-liner for this follows the list):

eucalyptus.conf
eucalyptus-cloud.conf
eucalyptus-network.conf
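
One quick way to flip those stanzas in one go (double-check each file afterwards, since the exact stanza wording can vary between releases):

for f in eucalyptus eucalyptus-cloud eucalyptus-network; do
    sudo sed -i 's/^start on/stop on/' /etc/init/$f.conf
done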

5.4. Configuring the resources
Then, we need to configure the cluster resources. For this do the following:

sudo crm configure

and paste the following:

primitive res_fs_clc ocf:heartbeat:Filesystem params device=/dev/drbd/by-res/uec-clc directory=/mnt/uecdata fstype=ext4 options=noatime
primitive res_ip_clc ocf:heartbeat:IPaddr2 params ip=192.168.0.100 cidr_netmask=24 nic=eth0
primitive res_ip_clc_src ocf:heartbeat:IPsrcaddr params ipaddress="192.168.0.100"
primitive res_uec upstart:eucalyptus  op start timeout=120s op stop timeout=120s op monitor interval=30s
primitive res_uec_image_store_proxy lsb:image-store-proxy
group rg_uec res_fs_clc res_ip_clc res_ip_clc_src res_uec res_uec_image_store_proxy
primitive res_drbd_uec-clc ocf:linbit:drbd params drbd_resource=uec-clc
ms ms_drbd_uec res_drbd_uec-clc meta notify=true
order o_drbd_before_uec inf: ms_drbd_uec:promote rg_uec:start
colocation c_uec_on_drbd inf: rg_uec ms_drbd_uec:Master
property stonith-enabled=false
property no-quorum-policy=ignore
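
Once that configuration is committed, you can watch the resources come up and then force a failover to make sure the passive node really takes over:

sudo crm_mon -1              # rg_uec and the DRBD master should sit on clc1
sudo crm node standby clc1   # resources should migrate to clc2
sudo crm node online clc1    # bring clc1 back as the passive node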

6. Specify the Cloud IP in the CC, the NC, and the CLC.
Once you finish the configuration above, one of the CLCs will be the active one and the other the passive one. The Cluster Resource Manager will decide which one becomes primary; however, it is expected that CLC1 will.

Now, as specified in the UEC Advanced Installation doc, we need to specify the Cloud Controller VIP in the CC; it is also important to do it in the NC. This is done in /etc/eucalyptus/eucalyptus.conf by adding:

VNET_CLOUDIP="192.168.0.100"

Then, log into the web front end (192.168.0.100:8443) and change the Cloud Configuration to have the VIP as the Cloud Host.

By doing this, the new certificates will be generated with the VIP, which will allow you to connect to the cloud even if the primary Cloud Controller fails and the second one takes control of the service.

Finally, restart the Walrus, CC/SC, and NC and enjoy.

7. Final Thoughts
The cluster resource manager is using the upstart job to manage the Cloud Controller. However, this is not optimal and is only used here for testing purposes. An OCF Resource Agent will be required to adequately start/stop and monitor Eucalyptus. The OCF RA will be developed soon, and this will be discussed at the Ubuntu Developer Summit – Natty.

Installing Zenoss on Ubuntu 10.04

Zenoss (a network monitoring and IT management tool) has been running on Python 2.4 for quite a while now, which made it impossible to run on the last few Ubuntu releases. However, this has now changed: a few days ago they announced the migration of Zenoss to Python 2.6. Please see the announcement here.

If you wish to help with testing, you can refer to these Ubuntu Installation Notes, or do the following:

First we need to install the necessary dependencies:

$ sudo apt-get install rsync python-dev build-essential make bzip2 sudo sysv-rc-conf snmpd swig autoconf mysql-server-5.0 libmysqlclient15-dev libmysqlclient15off ttf-liberation ttf-linux-libertine unzip subversion librrd4

Second, we need to create the zenoss user, and create the destination path with the right permissions:

$ sudo adduser zenoss
$ sudo mkdir /usr/local/zenoss
$ sudo chown zenoss /usr/local/zenoss

Third, we need to configure the environment for the zenoss user, so first we log in as the zenoss user:

$ sudo -i -u zenoss

Then add the following to .bashrc:

export ZENHOME=/usr/local/zenoss
export PYTHONPATH=$ZENHOME/lib/python
export PATH=$ZENHOME/bin:$PATH

And reload .bashrc
$ source .bashrc

Fourth, while still logged in as the zenoss user, we check out the trunk:

$ svn co http://dev.zenoss.org/svn/trunk/inst zenossinst

Finally, we install zenoss.

$ cd zenossinst
$ ./install.sh

Ubuntizing People!!

About two months ago I wrote a post where I mentioned how I introduced some friends to Ubuntu… They showed their desire to see what it was and how it worked. The post is here: The Need to Ubuntize people.

Anyways, the thing is that before going to UDS I showed one of my friends Ubuntu running with full Compiz effects and he was just amazed. He told me it looks just like a Mac!! I told him… well, they are similar, and explained the whole concept behind Unix and Linux to him again. Back from UDS, I brought some CDs and the Ubuntu User magazine. I gave him a CD and the magazine (with some stickers) and he was just excited about it. The thing is that today he told me: “I just installed Ubuntu and I love it!!” First of all I thought, “wow, he just did it by himself without requiring my help!!”… and well, he did it using Wubi… but that doesn’t matter… he took the first step by jumping into it by himself!!

Anyways, after he told me that, he spent quite a few hours using Ubuntu and setting up all he needed. He got Empathy and Evolution working with his accounts without any help. However, he couldn’t set up three things: Skype, Flash Player, and his music!! I helped him out and showed him how easy it was to install everything…

First, I told him to use the Ubuntu Software Center to install the Adobe Flash Player, but he couldn’t find it there… I browsed for it myself and it was indeed there… the thing was that he just couldn’t find it. To me, this means that there are still things to improve in the Ubuntu Software Center to make it easier for people to install new software.

Second, he told me that he was having difficulty playing his music. I just showed him how to install the codecs by clicking the song and letting the player find them by itself, and that’s it. I also showed him how to install Skype.

Anyways, after resolving all the issues, he just kept telling me how amazed he was with Ubuntu. He told me, “it is so much faster than Windows, it is so straightforward and easy to use, and I just love it”. He loved it so much that he is still using it at this exact moment, and it’s been more than 12 hours!! He also kept telling me how much faster Karmic is on his old laptop in comparison to Windows!! And well… I’m just happy he enjoys it as much as I do.

In conclusion… Ubuntu ROCKS!! I’m very amazed that he did pretty much everything he needed without any help and that he loves it. He is just happy that he does not have the same problems he had with Windows in the same time frame. Anyways, I guess this is all thanks to the developers who put so much effort into the upstream projects, and to all of those who make Ubuntu rock!!

Oh… btw… I almost forgot… he said “Ubuntu is the future…”, “Windows is so gonna lose against Ubuntu…”, so… if a regular Windows user thinks this, I’m pretty sure people are just getting sicker and sicker of Windows and are desperately looking for something new… and unfortunately for us, some of them are switching to Mac because they don’t know anything else… so this is where we should show them that there is a whole world besides just Windows and Mac… and this world is the Linux world, and in my opinion… I would steer them toward Ubuntu!!

Quickly Rocks!

So… I gave it a try… and I liked it. For the last few weeks I had been programming (after a long time) on a project for one of my classes. Since I had wanted to learn Python for a while, I decided to do my project in Python. At first, it was just going to be a command-line application, but… after giving it a second thought, I decided to provide a GUI too, using PyGTK.

And since I had seen lots of posts about Quickly, I decided to give Quickly a try too… and it is awesome! I really like it.

Anyways, my app is a simple one that’s letting me learn Python. It enciphers a text file using a public key, and then hides the message in an image. I’m making use of python-gnupginterface and python-stepic for this. As you can see, it sounds like a simple app, and it really is… I’ll publish it when I’m done.

Btw… give Quickly a try, you’re gonna like it. Thanks rickspencer and didrocks for this awesome tool.