MAAS Development Summary – Week of July 3rd – 14th

Hello MAASters!

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS and bug fixes in released MAAS versions.

MAAS 2.3 (current development release)

  • Completed Django 1.11 transition
      • MAAS 2.3 snap will use Django 1.11 by default.
      • Ubuntu package will use Django 1.11 in Artful+
  • Network beaconing & better network discovery
      • MAAS now listens for [unicast and multicast] beacons on UDP port 5240. Beacons are encrypted and authenticated using a key derived from the MAAS shared secret. Upon receiving certain types of beacons, MAAS will reply, confirming to the sender that an existing MAAS on the network has the same shared key (a conceptual sketch of this shared-secret authentication appears after the bug-fix list below). In addition, records are kept about which interface each beacon was received on, and what VLAN tag (if any) was in use on that interface. This allows MAAS to determine which interfaces observed the same beacon (and thus must be on the same fabric). This information can also determine whether [what would previously have been assumed to be] a separate fabric is actually an alternate VLAN in an existing fabric.
      • The maas-rack send-beacons command is now available to test the beacon protocol. (This command is intended for testing and support, not general use.) The MAAS shared secret must be installed before the command can be used. By default, it will send multicast beacons out all possible interfaces, but it can also be used in unicast mode.
      • Note that while IPv6 support is planned, support for receiving IPv6 beacons in MAAS is not yet available. The maas-rack send-beacons command, however, is already capable of sending IPv6 beacons. (Full IPv6 support is expected to make beacons more flexible, since IPv6 multicast can be sent out on interfaces without a specific IP address assignment, and without resorting to raw sockets.)
      • Improvements to rack registration are now under development, so that users will see a more accurate representation of fabrics upon initial installation or registration of a MAAS rack controller.
  • Bug fixes
    • LP: #1701056 – Show correct information on the device details page as a normal user
    • LP: #1701052 – Do not show the controllers tab as a normal user
    • LP: #1683765 – Fix format when devices/controllers are selected to match that of machines
    • LP: #1684216 – Update button label from ‘Save selection’ to ‘Update selection’
    • LP: #1682489 – Fix Cancel button on the add user dialog, which caused the user to be added anyway
    • LP: #1682387 – ‘Unassigned’ should be ‘(Unassigned)’
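
The beacon wire format itself is MAAS-internal, but the shared-secret idea above is easy to illustrate. Below is a minimal conceptual sketch (not MAAS’s actual implementation; the key-derivation step and all names are illustrative) of deriving a key from a shared secret and attaching an HMAC to a beacon payload, so a receiver can confirm the sender holds the same secret:

```python
# Conceptual sketch only -- not MAAS's wire format. Derive a key from the
# shared secret and MAC the beacon payload so receivers can verify that
# the sender holds the same secret.
import hashlib
import hmac
import json
import os


def derive_beacon_key(shared_secret):
    # Illustrative derivation; MAAS derives its beacon key differently.
    return hashlib.sha256(b"maas-beacon" + shared_secret).digest()


def make_beacon(shared_secret, payload):
    key = derive_beacon_key(shared_secret)
    body = json.dumps(payload, sort_keys=True).encode()
    mac = hmac.new(key, body, hashlib.sha256).digest()
    return mac + body


def verify_beacon(shared_secret, packet):
    key = derive_beacon_key(shared_secret)
    mac, body = packet[:32], packet[32:]
    if not hmac.compare_digest(mac, hmac.new(key, body, hashlib.sha256).digest()):
        return None  # sender does not share our secret
    return json.loads(body)


if __name__ == "__main__":
    secret = os.urandom(32)  # stands in for the installed MAAS shared secret
    packet = make_beacon(secret, {"type": "solicitation", "ifname": "eth0"})
    print(verify_beacon(secret, packet))
```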

MAAS 2.2.1

Over the past week the team also focused on preparing and QA’ing the new MAAS 2.2.1 point release, which was released on Friday, June 30th. For more information about the bug fixes, please visit https://launchpad.net/maas/+milestone/2.2.1.

MAAS 2.2.1 is available in:

  • ppa:maas/stable

MAAS Development Summary – June 19th – 23rd

Announcements

  • Transition to Git in Launchpad
    The MAAS team is happy to announce that we have moved our code repositories away from Bazaar. We are now using Git in Launchpad.[1]

MAAS 2.3 (current development release)

This week, the team has worked on the following features and improvements:

  • Codebase transition from bzr to git – This week the team focused efforts on updating all processes for the upcoming transition to Git. The progress involved:
    • Updated Jenkins job configuration to run CI tests from Git instead of bzr.
    • Created new Jenkins jobs to test older releases via Git instead of bzr.
    • Updated the Jenkins job triggering mechanism from Tarmac to the Jenkins Git plugin.
    • Replaced the maas code lander (based on tarmac) with a Jenkins job to automatically land approved branches.
      • This also includes a mechanism to automatically set milestones and close Launchpad bugs.
    • Updated Snap building recipe to build from Git. 
  • Removal of ‘tgt’ as a dependency behind a feature flag – This week we landed the ability to load ephemeral images via HTTP from the initrd, instead of via iSCSI (served by ‘tgt’). While the use of ‘tgt’ is still the default, the ability to not use it is hidden behind a feature flag (http_boot); see the illustrative sketch after this list. This is only available in trunk.
  • Django 1.11 transition – We are down to the last items of the transition, and we are targeting it to be completed in the upcoming week.
  • Network Beaconing & better network discovery – The team is continuing to make progress on beacons. Following a thorough review, the beaconing packet format has been optimized; beacon packets are now simpler and more compact. We are targeting rack registration improvements for next week, so that newly-registered rack controllers do not create new fabrics if an interface can be determined to be on an existing fabric.
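
As a purely illustrative aside on the feature-flag gate mentioned above: ‘http_boot’ is the flag named in the summary, but everything else in the sketch below is hypothetical and not MAAS’s actual code.

```python
# Illustrative only: what a feature-flag gate for the ephemeral boot path
# might look like. 'http_boot' is the flag named above; the rest is
# hypothetical and not MAAS's actual implementation.
def ephemeral_boot_method(enabled_feature_flags):
    if "http_boot" in enabled_feature_flags:
        # initrd downloads and loads the ephemeral image over HTTP
        return "http"
    # default path: ephemeral image served over iSCSI by 'tgt'
    return "iscsi"


print(ephemeral_boot_method(set()))          # -> 'iscsi' (default)
print(ephemeral_boot_method({"http_boot"}))  # -> 'http'
```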

Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. These fixes will be available in the next point release of MAAS 2.2 (2.2.1). The MAAS team is currently targeting the 2.2.1 release for the upcoming week.

  • LP: #1687305 – Fix virsh pods reporting wrong storage
  • LP: #1699479 – A couple of unstable tests failing when using IPv6 in LXC containers

[1]: https://git.launchpad.net/maas


MAAS Development Summary – June 12th – 16th

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS and bug fixes in released MAAS versions.

MAAS Sprint

The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

MAAS 2.3 (current development release)

The team has been working on the following features and improvements:

  • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support; a hedged API sketch follows this list. The team continues to work on the matching UI support for this feature.
  • Codebase transition from bzr to git – This week the team focused efforts on updating all processes for the upcoming transition to Git. The progress so far is:
    • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
    • Started working on new processes for auto-testing and landing PRs.
  • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
  • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
  • Started the removal of ‘tgt’ as a dependency – We have started the removal of ‘tgt’ as a dependency. This simplifies the boot process by not loading ephemeral images from tgt, but rather having the initrd download and load the ephemeral environment.
  • UI Improvements
    • Performance Improvements – Improved the loading of elements on the Device Discovery, Node listing and Events pages, which greatly improves UI performance.
    • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
    • Removed auto-save on blur for the Fabric details summary row; static content is now shown when not in edit mode.
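
Since the upstream proxy support is API-only at this stage, the sketch below shows one hedged way to drive MAAS configuration over the API with OAuth 1.0a credentials. The maas/ endpoint, the set_config operation and the http_proxy setting are standard MAAS API features; the use_peer_proxy setting name is an assumption on my part and may differ from what finally lands.

```python
# Hedged sketch: driving MAAS configuration over the API (the upstream
# proxy feature is API-only at this stage). Requires requests and
# requests-oauthlib. The 'use_peer_proxy' name is an assumption and may
# differ from the final implementation.
import requests
from requests_oauthlib import OAuth1

MAAS_URL = "http://maas.example.com:5240/MAAS"  # adjust to your region controller
API_KEY = "consumer:token:secret"               # your MAAS API key (three colon-separated parts)

consumer_key, token_key, token_secret = API_KEY.split(":")
auth = OAuth1(consumer_key,
              resource_owner_key=token_key,
              resource_owner_secret=token_secret,
              signature_method="PLAINTEXT")


def set_config(name, value):
    # POST op=set_config against the standard 'maas' configuration endpoint.
    response = requests.post(MAAS_URL + "/api/2.0/maas/", auth=auth,
                             data={"op": "set_config", "name": name, "value": value})
    response.raise_for_status()


# Point MAAS's internal proxy at an upstream proxy; the second setting
# name is assumed, not confirmed.
set_config("http_proxy", "http://upstream-proxy.example.com:3128")
set_config("use_peer_proxy", "true")
```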

Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. These fixes will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • LP: #1678339 – Allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
  • LP: #1652298 – Improve loading of elements in the device discovery page

MAAS Development Summary – June 8th, 2017

Thursday June 8th, 2017

The MAAS team is happy to announce the introduction of development summaries. We hope this helps to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bugs fixed in released MAAS versions.

Announcements

With the MAAS 2.2 release out of the door, we are happy to announce that:

  • MAAS 2.3 is now open for development.
  • MAAS is moving to Git in Launchpad – In the coming weeks, the MAAS source will be hosted in a Git repository in Launchpad, once we complete the work of updating all our internal processes (e.g. CI, landers, etc.).

MAAS 2.3 (current development release)

With efforts now focused on the new development release, MAAS 2.3, the team has been working on the following features and improvements:

  • Started adding support for Django 1.11 – MAAS will continue to be backward compatible with Django 1.8.
  • Adding support for ‘upstream’ proxy – MAAS-deployed machines will continue to use MAAS’ internal proxy, while allowing MAAS’ proxy to communicate with an upstream proxy.
  • Started adding network beaconing – New feature to support better network (subnets, VLANs) discovery and allow fabric deduplication.
    • Officially registered IPv4 and IPv6 multicast groups for MAAS beaconing (224.0.0.118 and ff02::15a, respectively).
    • Implemented a mechanism to provide authenticated encryption using the MAAS shared secret.
    • Prototyped the initial beaconing multicast join mechanism and receive path (a minimal multicast join/send sketch follows this list).
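
To make the multicast mechanics concrete, here is a minimal standard-library sketch (not MAAS code) of the join/send pattern: join the registered IPv4 group 224.0.0.118 and exchange a small datagram on UDP port 5240, the port MAAS uses for beacons; the payload is purely illustrative.

```python
# Minimal sketch (not MAAS code) of the IPv4 multicast join/send pattern
# behind beaconing: join the registered group 224.0.0.118 and exchange a
# small datagram on UDP port 5240. The payload is illustrative only.
import socket
import struct

BEACON_GROUP = "224.0.0.118"  # IPv4 multicast group registered for MAAS beaconing
BEACON_PORT = 5240            # UDP port MAAS uses for beacons


def open_receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", BEACON_PORT))
    # Join the group on the default interface (0.0.0.0 == INADDR_ANY).
    mreq = struct.pack("4s4s", socket.inet_aton(BEACON_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock


def send_beacon(payload):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (BEACON_GROUP, BEACON_PORT))
    sock.close()


if __name__ == "__main__":
    receiver = open_receiver()
    send_beacon(b"hello-beacon")
    data, sender = receiver.recvfrom(4096)
    print("received", data, "from", sender)
```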

Libmaas (python-libmaas)

Continuing to improve the new MAAS Python library (python-libmaas), over the past week we have focused our efforts on the following improvements (a brief usage sketch follows the list):

  • Add support for providing nested objects and object sets.
  • Add support for updating any object accessible via the library.
  • Add the ability to read interfaces (nested) under Machines, Devices, Rack Controllers and Region Controllers.
  • Add the ability to read VLANs (nested) under Fabrics.
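
As a quick illustration of the nested reads listed above, here is a hedged python-libmaas usage sketch. The login call follows the library’s documented style, but treat the URL, credentials and exact attribute names as assumptions that may vary between versions.

```python
# Hedged python-libmaas sketch: log in and read interfaces nested under
# machines, per the improvements listed above. URL, credentials and exact
# attribute names are assumptions and may vary between library versions.
from maas.client import login


def main():
    client = login(
        "http://maas.example.com:5240/MAAS/",  # your region controller
        username="admin",
        password="secret",
    )
    for machine in client.machines.list():
        print(machine.hostname)
        for interface in machine.interfaces:   # nested read added to libmaas
            print("  ", interface.name, interface.mac_address)


if __name__ == "__main__":
    main()
```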

Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. These fixes will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • LP: #1694767 – RSD composition not setting local disk tags
  • LP: #1694759 – RSD Pod refresh shows ComposedNodeState is “Failed”
  • LP: #1695083 – Improve NTP IP address selection for MAAS DHCP clients

Questions?

IRC – Find us on #maas @ freenode.

ML – https://lists.ubuntu.com/mailman/listinfo/maas-devel


MAAS 2.2.0 Released!

I’m happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

  • MAAS Pods – Ability to dynamically create a machine on demand. This is reflected in MAAS’ support for Intel Rack Scale Design.
  • Hardware Testing
  • DHCP Relay Support
  • Unmanaged Subnets
  • Switch discovery and deployment on Facebook’s Wedge 40 & 100
  • Intel Rack Scale Design support
  • MAAS Client Library
  • Various improvements and minor features

For more information, please read the release notes, which are available here.

Availability
MAAS 2.2.0 is currently available in the following MAAS team PPA:
ppa:maas/next
Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week, to give users enough notice that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.

So… Where’s Chuck this past Weekend?

Filming Fast & Furious 7…

Note: The car is what we call a combi in Peru, which is a form of public transportation. While I didn’t create the original FF7 pic, it is a joke aimed at Peruvian combi drivers, who are some of the most reckless drivers in the world.


Getting Started with MAAS and Juju: MAAS Overview

For a while, I have been wanting to write about MAAS and how it can easily deploy workloads (especially OpenStack) with Juju, and the time has finally come. This will be the first of a series of posts in which I’ll provide an overview of how to quickly get started with MAAS and Juju.

What is MAAS?

I think that MAAS does not require an introduction, but if people really need to know, this awesome video will provide a far better explanation than the one I can give in this blog post.

http://youtu.be/J1XH0SQARgo

 

Components and Architecture

MAAS has been designed so that it can be deployed in different architectures and network environments. MAAS can be deployed as either a single-node or a multi-node architecture, which allows it to scale to meet your needs. It has two basic components: the MAAS Region Controller and the MAAS Cluster Controller.

MAAS Architectures

Region Controller

The MAAS Region Controller is the component that users interface with, and it controls the Cluster Controllers. It hosts the WebUI and the API, the MAAS metadata server for cloud-init, and the DNS server. The Region Controller also runs an rsyslogd server to log the installation process, as well as a proxy (squid-deb-proxy) used to cache Debian packages. The preseeds used for the different stages of the process are also stored here.

Cluster Controller

The MAAS Cluster Controller only interfaces with the Region Controller and is in charge of provisioning in general. It hosts the TFTP and DHCP server(s), and it is where the PXE files and ephemeral images are stored. It is also the Cluster Controller’s job to power the managed nodes on and off (if configured).

The Architecture

As you can see in the image above, MAAS can be deployed in either a single-node or multi-node configuration. The way MAAS has been designed makes it highly scalable, allowing you to add more Cluster Controllers, each managing a different pool of machines. A single-node scenario can grow into a multi-node scenario by simply adding more Cluster Controllers. Each Cluster Controller has to register with the Region Controller, and each can be configured to manage a different network. The intent is for each Cluster Controller to manage a different pool of machines on a different network (for provisioning), allowing MAAS to manage hundreds of machines. This is completely transparent to users, because MAAS exposes the machines as a single pool, all of which can be used for deploying and orchestrating your services with juju.

How Does It Work?

MAAS has three basic stages: Enlistment, Commissioning and Deployment, which are explained below:

MAAS Process

Enlistment

The enlistment process is the process by which a new machine is registered with MAAS. When a new machine is started, it will obtain an IP address and PXE boot from the MAAS Cluster Controller. The PXE boot process will instruct the machine to load an ephemeral image that will run and perform an initial discovery process (via a preseed fed to cloud-init). This discovery process will obtain basic information such as network interfaces, MAC addresses and the machine’s architecture. Once this information is gathered, a request to register the machine is made to the MAAS Region Controller. Once this happens, the machine will appear in MAAS in the Declared state.

Commissioning

The commissioning process is the process in which MAAS collects hardware information, such as the number of CPU cores, RAM, disk size, etc., which can later be used as constraints. Once the machine has been enlisted (Declared state), it must be accepted into MAAS in order for the commissioning process to begin and for it to become ready for deployment. For example, in the WebUI an “Accept & Commission” button will be present. Once the machine is accepted into MAAS, it will PXE boot from the MAAS Cluster Controller and will be instructed to run the same ephemeral image (again). This time, however, the commissioning process will be instructed to gather more information about the machine, which will be sent back to the MAAS Region Controller (via cloud-init, from the MAAS metadata server). Once this process has finished, the machine information will be updated and it will change to the Ready state. This status means that the machine is ready for deployment.

Deployment

Once machines are in the Ready state, they can be used for deployment. Deployment can happen with either juju or the maas-cli (or even the WebUI). The maas-cli will only allow you to install Ubuntu on the machine, while juju will not only deploy Ubuntu on it, but will also allow you to orchestrate services. When a machine has been deployed, its state will change to Allocated to <user>. This state means that the machine is in use by the user who requested its deployment.

Releasing Machines

Once a user no longer needs the machine, it can be released, and its status will change from Allocated to <user> back to Ready. This means that the machine will be turned off and made available for later use.
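
Putting the lifecycle together, here is a small illustrative sketch (not MAAS code) of the state transitions described above, from enlistment through deployment and release:

```python
# Illustrative sketch (not MAAS code) of the node lifecycle described
# above: Declared -> Ready (accept + commission) -> Allocated (deploy)
# -> Ready again (release).
TRANSITIONS = {
    ("Declared", "accept_and_commission"): "Ready",
    ("Ready", "deploy"): "Allocated",
    ("Allocated", "release"): "Ready",
}


def next_state(state, action):
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError("cannot %r a node in state %r" % (action, state))


state = "Declared"  # where a node lands after enlistment
for action in ("accept_and_commission", "deploy", "release"):
    state = next_state(state, action)
    print(action, "->", state)
```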

But… How do Machines Turn On/Off?

Now, you might be wondering how the machines are turned on and off, and who is in charge of that. MAAS can manage power devices such as IPMI/iLO, Sentry Switch CDUs, or even virsh. By default, we expect all machines controlled by MAAS to have IPMI/iLO cards, and if yours do, MAAS will attempt to auto-detect and auto-configure them during the Enlistment and Commissioning processes. Once machines are accepted into MAAS (after enlistment), they will be turned on automatically and commissioned (provided IPMI was discovered and configured correctly). This also means that every time a machine is deployed, it will be turned on automatically.

Note that MAAS not only handles physical machines; it can also handle virtual machines, hence the virsh power management type. However, you will have to configure the details manually in order for MAAS to manage these virtual machines and turn them on and off automatically.


Arequipa – Peru

I just wanted to share a few pics I was sent of the Volcano & my hometown… enjoy!


PowerNap Improvements session at UDS-O

After the success of the PowerNap improvements in Ubuntu Natty 11.04, we will be having another session at UDS-O on Thursday the 12th at 15:00. In this session we will discuss the following:

  • Second-stage action when running in PowerSave mode.
  • Support for port ranges in network monitors.
  • Changing the polling monitoring system to an event-based system.
  • Client/server approach to monitor/manage PowerNap “client machines” over the network for data-center-wide deployments.
  • Server ARP network monitoring for automatic wake-up of clients.
  • API-like approach for integration with other projects.

Everyone who’s interested is more than welcome to join! For more information, the blueprint can be found HERE.


HA Cluster Stack Session at UDS-O

On Thursday the 12th at noon we will be having the HA Cluster Stack session. In the session we will discuss the following:

  • Discuss the adoption of new upstream releases of the HA Cluster Stack to include in Oneiric in preparation for the next Ubuntu LTS release.
  • Finish up work items from previous sessions (mainly documentation).
  • Gather feature requests and discuss the creation of meta-packages.
  • And, if time allows, I’d like to follow up on HA for OpenStack, as they had a session about it at their Design Summit.

If you are interested in the future of HA Clustering in Ubuntu, you are more than welcome to join this session. For more information, the blueprint can be found HERE.
