
=OAR Cloud Project=

This project aims to create a lightweight cloud computing architecture on top of the OAR batch scheduler. The Git repository is available on GitHub.

=Project Members=

This project is proposed by:

Olivier Richard - teacher and researcher in the RICM curriculum at Polytech Grenoble

Three students from RICM are working on it:


 * Jordan Calvi (RICM4)
 * Alexandre Maurice (RICM4)
 * Michael Mercier (RICM5)

=Design=

==Context==

There are two kinds of actors dealing with OAR Cloud: users and administrators. The F.u.* and F.a.* labels below denote the user and administrator features, respectively.



==User==

Main features:

 * F.u.0 Connect to an account
 * F.u.1 Launch and configure one or more instances
 * F.u.2 Deploy an image on one or more instances
 * F.u.3 Modify and save images
 * F.u.4 Set up alarms based on rules using metrics
 * F.u.5 Be informed by e-mail and/or notification of interesting events

Advanced features:

 * F.u.6 Automatically resize an instance (adapt its resources) using predefined rules and schedules
 * F.u.7 Load balancing between several instances
 * F.u.8 Advanced network configuration for users: ACLs, subnets, VPN...

==Administrator==

 * F.a.0 Create/delete user accounts
 * F.a.1 Add/remove and manage resources
 * F.a.2 Visualize resource and instance states
 * F.a.3 Install and update node operating systems
 * F.a.4 Handle user access rights
 * F.a.5 Set up alarms based on rules using metrics
 * F.a.6 Be informed by e-mail and/or notification of interesting events

==Logical View==

Here is the logical view of the OAR Cloud system. Each component in this diagram represents a software component type, and the links between components represent the communication between them.



Description of the main components:
 * AccountManager: handles user and admin access rights
 * AccessPoint: the system access point reached by the different access tools
 * InstanceManager: manages the creation, configuration and deletion of instances across the nodes; it also handles appliance persistence and deployment
 * UserCLI & AdminCLI: command line access tools for users and admins

=Milestones=

Each milestone of the project is described below.

==M1==

In an Ubuntu 12.04 LTS environment:
 * 1) install and configure OAR
 * 2) install and configure LXC
 * 3) make an OAR reservation
 * 4) launch one or more VMs using LXC
 * 5) connect to the VM
 * 6) check that killing the job also kills the VM
 * 7) script this! (a sketch is given below)
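A minimal sketch of steps 3) to 6), assuming a configured OAR frontend and the Ubuntu LXC template (the container name vm0 and the walltime are indicative):

 # Submit an interactive job on one node for one hour
 oarsub -I -l nodes=1,walltime=1:00:00
 # On the reserved node: create and start a container
 sudo lxc-create -t ubuntu -n vm0
 sudo lxc-start -n vm0 -d
 sudo lxc-console -n vm0    # connect to the VM (leave with Ctrl+a q)
 # Killing the job (oardel <job_id>) should also kill the container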

The cigri devel appliance was used as a configuration example for this. The Ubuntu 12.04 LTS distribution was chosen because it seems to be one of the few distributions where LXC works out of the box.

===OAR settings===

 * the job manager "job_resource_manager_cgroups.pl" generates cpuset errors
 * the job manager "job_resource_manager.pl" generates cpuset errors too
 * I thought the problem came from a database conflict, so I tried to run the `update_cpuset_id.sh` script, but it showed an error message as well

The problem comes from the cgroup-lite service, which runs by default on Ubuntu 12.04. Stopping this service with `service cgroup-lite stop` solves the problem for OAR but breaks LXC.

I found a trick to make OAR and LXC work together: disable the cpuset feature of OAR. In /etc/oar/oar.conf (there is a copy in the M1 folder) I commented out CPUSET_PATH and set OARSUB_FORCE_JOB_KEY to yes, as suggested in the CPUSET_PATH comment.
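The relevant part of the file then looks roughly like this (the exact default value of CPUSET_PATH may differ on your install):

 # /etc/oar/oar.conf (excerpt)
 #CPUSET_PATH="/oar"           # commented out: disables OAR's cpuset feature
 OARSUB_FORCE_JOB_KEY="yes"    # forced job key, as suggested in the comment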

Thus, I could run an LXC container inside a job, and the container vanished when the job was killed.

===Questions===

 * Is the OAR cpuset mandatory, even if LXC manages it?

==M2==

In Ubuntu 12.04 LTS:
 * 1) install and configure LXC, libvirt and Open vSwitch
 * 2) launch at least 2 VMs
 * 3) make the VMs ping each other
 * 4) script this! (a sketch is given below)
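A minimal sketch of step 3), assuming two containers vm1 and vm2 already exist and that their host-side veth interface names (which depend on the container configuration) are known:

 sudo ovs-vsctl add-br br0             # create an Open vSwitch bridge
 sudo ovs-vsctl add-port br0 veth-vm1  # attach vm1's host-side interface (name indicative)
 sudo ovs-vsctl add-port br0 veth-vm2  # attach vm2's host-side interface
 # inside vm1 (10.0.3.11) and vm2 (10.0.3.12):
 #   ip addr add 10.0.3.11/24 dev eth0
 #   ping -c 3 10.0.3.12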

==M3==
TODO

=Tools=

==LXC==

LXC is a lightweight hypervisor that runs isolated appliances. It provides a virtual environment with its own process and network namespaces, similar to a chroot. Since LXC relies on the host's Linux kernel, only operating systems compatible with that kernel can run in a container. LXC is based on cgroups (control groups), a Linux kernel feature that manages resources such as CPU, memory and disk I/O by limiting resources, prioritizing groups, accounting (measuring usage), isolating (separate namespaces, so the processes, network connections and files of one group are not visible to other groups) and controlling groups.

===Installation===

/!\ LXC has been set up successfully on Ubuntu 12.04 LTS, since container launching does not work on Debian Wheezy (testing). /!\

''Packages installation''
 * /?\ Containers will be placed in /var/lib/lxc /?\
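On Ubuntu 12.04 this is presumably a single package:

 sudo apt-get install lxc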

''Mounting cgroups automatically'': edit /etc/fstab and add the following
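The usual entry for mounting all cgroup controllers at once is:

 cgroup  /sys/fs/cgroup  cgroup  defaults  0  0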

''Enabling the previous modifications''
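For instance by remounting everything declared in fstab:

 sudo mount -a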

''Checking everything is OK''
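The lxc-checkconfig tool reports whether the kernel and cgroup setup are usable:

 lxc-checkconfig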

===Manipulation of containers===
''Creating a container running Ubuntu''
 * /!\ By default, the version of the guest OS is the same as the hosting one. /!\
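Using the Ubuntu template:

 sudo lxc-create -t ubuntu -n ubuntu1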

''Showing existing containers and those that are running''
 * /?\ The first line indicates existing containers and the second one those in the running state. /?\
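Presumably:

 sudo lxc-ls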

''Obtaining information about ubuntu1''
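For our container ubuntu1:

 sudo lxc-info -n ubuntu1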

''Starting the container''
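The -d flag detaches the container from the current terminal:

 sudo lxc-start -n ubuntu1 -d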

''Connecting to the container''
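Through the container console:

 sudo lxc-console -n ubuntu1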

''Shutting down the container''
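lxc-stop halts the container immediately (Ubuntu's lxc also ships an lxc-shutdown command for a clean shutdown, if preferred):

 sudo lxc-stop -n ubuntu1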

''Exiting the console''
 * press Ctrl+a then q

''Deleting the container''
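This removes the container and its root file system:

 sudo lxc-destroy -n ubuntu1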

===Configuring the container===

At boot time, a container reads the file /var/lib/lxc/{VM-name}/config to set up its configuration (root file system, number of TTYs, limits, etc.). For ubuntu1 it looks like this:

 lxc.network.type = veth
 lxc.network.link = lxcbr0
 lxc.network.flags = up
 lxc.network.hwaddr = 00:16:3e:24:e5:9a
 lxc.utsname = ubuntu1
 lxc.devttydir = lxc
 lxc.tty = 4
 lxc.pts = 1024
 lxc.rootfs = /var/lib/lxc/ubuntu1/rootfs
 lxc.mount = /var/lib/lxc/ubuntu1/fstab
 lxc.arch = amd64
 lxc.cap.drop = sys_module mac_admin
 lxc.pivotdir = lxc_putold
 # uncomment the next line to run the container unconfined:
 # lxc.aa_profile = unconfined
 lxc.cgroup.devices.deny = a
 # Allow any mknod (but not using the node)
 lxc.cgroup.devices.allow = c *:* m
 lxc.cgroup.devices.allow = b *:* m
 # /dev/null and zero
 lxc.cgroup.devices.allow = c 1:3 rwm
 lxc.cgroup.devices.allow = c 1:5 rwm
 # consoles
 lxc.cgroup.devices.allow = c 5:1 rwm
 lxc.cgroup.devices.allow = c 5:0 rwm
 lxc.cgroup.devices.allow = c 4:0 rwm
 lxc.cgroup.devices.allow = c 4:1 rwm
 # /dev/{,u}random
 lxc.cgroup.devices.allow = c 1:9 rwm
 lxc.cgroup.devices.allow = c 1:8 rwm
 lxc.cgroup.devices.allow = c 136:* rwm
 lxc.cgroup.devices.allow = c 5:2 rwm
 # rtc
 lxc.cgroup.devices.allow = c 254:0 rwm
 # fuse
 lxc.cgroup.devices.allow = c 10:229 rwm
 # tun
 lxc.cgroup.devices.allow = c 10:200 rwm
 # full
 lxc.cgroup.devices.allow = c 1:7 rwm
 # hpet
 lxc.cgroup.devices.allow = c 10:228 rwm
 # kvm
 lxc.cgroup.devices.allow = c 10:232 rwm

===Configuring the default network and switch===

See /etc/default/lxc.
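On Ubuntu 12.04 this file contains roughly the following defaults (values indicative):

 USE_LXC_BRIDGE="true"
 LXC_BRIDGE="lxcbr0"
 LXC_ADDR="10.0.3.1"
 LXC_NETMASK="255.255.255.0"
 LXC_NETWORK="10.0.3.0/24"
 LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
 LXC_DHCP_MAX="253"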

==libvirt==

===Installation===

''Packages installation''
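On Ubuntu 12.04 the package is presumably:

 sudo apt-get install libvirt-bin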

''Creating an XML configuration file to import an existing container into libvirt''
 * /!\ Note that libvirt cannot install an OS in a container. Therefore, an LXC container with an OS must have been set up beforehand (that is what we did above). The container's file system directory is then given to libvirt when importing the VM. /!\

 * In order to create a libvirt container, an XML file describing the VM we want to import must be filled in. Here is a sample of such a file for the VM "ubuntu1" we have just created (the memory size and file system entry are indicative):

 <domain type='lxc'>
   <name>ubuntu1</name>
   <memory>32768</memory>
   <os>
     <type>exe</type>
     <init>/sbin/init</init>
   </os>
   <vcpu>1</vcpu>
   <on_poweroff>destroy</on_poweroff>
   <on_reboot>restart</on_reboot>
   <on_crash>destroy</on_crash>
   <devices>
     <emulator>/usr/lib/libvirt/libvirt_lxc</emulator>
     <filesystem type='mount'>
       <source dir='/var/lib/lxc/ubuntu1/rootfs'/>
       <target dir='/'/>
     </filesystem>
     <console type='pty'/>
   </devices>
 </domain>

''Booting the container''
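With virsh, first define the domain from its XML file, then start it (the file name ubuntu1.xml is indicative):

 virsh -c lxc:/// define ubuntu1.xml
 virsh -c lxc:/// start ubuntu1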

''Connecting to the container locally''
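Through the libvirt console:

 virsh -c lxc:/// console ubuntu1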

''Connecting to the container remotely''
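Using libvirt's remote URI support (user and host are placeholders):

 virsh -c lxc+ssh://user@host/ console ubuntu1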

''Shutting down the container''
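virsh destroy stops the container immediately; graceful shutdown support for LXC domains depends on the libvirt version:

 virsh -c lxc:/// destroy ubuntu1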

''Deleting the container''
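This removes the libvirt definition (the LXC root file system itself is untouched):

 virsh -c lxc:/// undefine ubuntu1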

===Problems===

 * Using Ubuntu as a host, when connecting to a libvirt VM running Debian, the guest appliance waits for the user to log in on two interfaces at the same time (tty1 and console), so it is not possible to log in.


 * Using Debian Wheezy as a host, when connecting to a libvirt VM, the console does not offer the user the possibility to log in. However, when using LXC directly there is no issue.

=Internal links=

 * UML

=Journal=

==19/02==

 * milestone definitions

==04/02==

 * We specified the subject
 * We distributed the work among us:
   * Jordan: LXC and libvirt
   * Alexandre: Open vSwitch and libvirt
   * Michael: OAR and global architecture