Projets-2015-2016-IaaS Docker-SRS (last revised 2016-03-14 by Romain.Badamo-Barthelemy, section: Availability)
<hr />
<div>This document provides a template for a Software Requirements Specification (SRS). It is inspired by the IEEE/ANSI 830-1998 standard.<br />
<br />
<br />
'''Read first:'''<br />
* http://www.cs.st-andrews.ac.uk/~ifs/Books/SE9/Presentations/PPTX/Ch4.pptx<br />
* http://en.wikipedia.org/wiki/Software_requirements_specification<br />
* [http://www.cse.msu.edu/~chengb/RE-491/Papers/IEEE-SRS-practice.pdf IEEE Recommended Practice for Software Requirements Specifications IEEE Std 830-1998]<br />
<br />
{|class="wikitable alternance"<br />
|+ Document History<br />
|-<br />
|<br />
!scope="col"| Version<br />
!scope="col"| Date<br />
!scope="col"| Authors<br />
!scope="col"| Description<br />
!scope="col"| Validator<br />
!scope="col"| Validation Date<br />
|-<br />
!scope="row" |<br />
| 0.1.0<br />
| February 2016<br />
| Romain Barthelemy, Alan Damotte, Robin Eudes, Kai Guo, Malek Mammar<br />
| Collaborative IaaS using Docker<br />
| Pierre-Yves Gibello<br />
| February 2016<br />
<br />
|}<br />
=Introduction=<br />
<br />
==Purpose of the requirements document==<br />
<br />
:*This Software Requirements Specification (SRS) identifies the requirements for the [[Projets-2015-2016-IaaS Docker | Collaborative IaaS]] project.<br />
:*This document is a guideline about the functionalities offered and the problems that the system solves.<br />
<br />
==Scope of the product==<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run large computations for a few users. To do so, the solution relies on Docker to virtualize user machines and to control the resource usage of each machine.<br />
<br />
==Definitions, acronyms and abbreviations==<br />
<br />
==References==<br />
<br />
:*The main page of the project: [[Projets-2015-2016-IaaS Docker | Collaborative IaaS]]<br />
<br />
==Overview of the remainder of the document==<br />
<br />
:The rest of the SRS examines the specifications of the [[Projets-2015-2016-IaaS Docker | Collaborative IaaS]] project in detail. Section two presents the general factors that affect the project and its requirements, such as user characteristics and project constraints. Section three outlines the detailed functional requirements, performance, system and other related requirements of the project. Supporting information is provided in the appendices.<br />
<br />
=General description=<br />
<br />
==Product perspective==<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
==Product functions==<br />
<br />
:*Client :<br />
:** Register to the service<br />
:** Request an instance, specifying the required resources<br />
:** Launch an instance<br />
:** Connect to a running instance<br />
:** Stop an instance<br />
:** Remove an instance<br />
:** Give a mark to a provider<br />
<br />
:*Provider :<br />
:** Register to the service<br />
:** Download required files<br />
:** Launch coordinator system<br />
:** Provide some of their resources<br />
:** Start providing<br />
:** Stop providing<br />
:** See the resource consumption of the instances running on their machine<br />
<br />
== User characteristics==<br />
:* The client/provider doesn't need to be familiar with programming<br />
:* The client/provider should know Unix basics<br />
:* The client/provider should know how SSH works<br />
:* The provider should know how to launch a script in a terminal<br />
<br />
==General constraints==<br />
:*System constraint:<br />
:** The provider's machine must run a Unix system (Ubuntu, for example)<br />
:** The provider must have Docker installed<br />
<br />
:*Environment constraint:<br />
:** Internet access is required to use the service<br />
<br />
==Assumptions and dependencies==<br />
:* The client/provider has Internet access<br />
<br />
=Specific requirements, covering functional, non-functional and interface requirements=<br />
<br />
==Functional requirements==<br />
:* The system must allow users to create their profile<br />
:* The system must allow the client to request an instance, specifying the required resources<br />
:* The system must allow the client to launch an instance<br />
:* The system must allow the client to connect to a running instance<br />
:* The system must allow the client to stop an instance<br />
:* The system must allow the client to remove an instance<br />
:* The system should allow the client to give a mark to a provider<br />
<br />
<br />
:* The system must allow the provider to download the required files<br />
:* The system must allow the provider to launch the coordinator system<br />
:* The system must allow the provider to offer some of their resources<br />
:* The system must allow the provider to start providing<br />
:* The system must allow the provider to stop providing<br />
:* The system should allow the provider to see the resource consumption of the instances running on their machine<br />
<br />
==Performance requirements==<br />
<br />
==Design constraints==<br />
<br />
==Logical database requirement==<br />
[[File:Database.jpg|center|thumb|1000px|Database diagram]]<br />
<br />
==Software System attributes==<br />
<br />
===Reliability===<br />
<br />
The system must deliver correct information at all times, so that:<br />
* Clients can only connect to their own instances<br />
* Clients can check the status of their instances<br />
* Providers can check the status of their machine<br />
<br />
===Availability===<br />
<br />
The system should be available 24/7, since both providers and clients should be able to use it at any time. Servers may occasionally be down, but the service must remain available 95% of the time.<br />
<br />
===Security===<br />
<br />
The security of the service is critical: clients must not be able to access information about other clients' instances. Likewise, providers should not be able to access the contents of the containers running on their machine. Modifications to the database must also be restricted.<br />
<br />
===Maintainability===<br />
<br />
Updates must be easy to apply, so that new functionalities can be added and the service improved with little effort.<br />
<br />
===Portability===<br />
:For the moment, the provider side of the system is available on Linux only.<br />
:However, if the required packages become available on other systems, the system may be released on other operating systems later.<br />
<br />
==Other requirements==<br />
:*The system must be able to run on Linux 14 or higher<br />
:*The system itself must have a low CPU footprint<br />
:*The system itself must have a low memory footprint<br />
<br />
=Product evolution=<br />
<br />
=Appendices=<br />
<br />
==Specification ==<br />
* The global project's page can be found [[Projets-2015-2016-IaaS Docker | here]].<br />
<br />
==Licensing Requirements==<br />
<br />
Project under GPLv3 licence: https://www.gnu.org/licenses/gpl-3.0.fr.html</div>

Projets-2015-2016-IaaS Docker (last revised 2016-03-14 by Romain.Badamo-Barthelemy, section: Resources management)
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run large computations for a few users. To do so, the solution relies on Docker to virtualize user machines and to control the resource usage of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather the ideas that would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their private profile. We imagine that clients could consult providers' profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes the geographic allocation between providers and clients (better network): it's better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops their machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop their machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent the provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are administrators of their machines, they can inspect or enter the containers. It would be good to guarantee clients that their instances are totally safe and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container; limit resource allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to provide an easy way to limit a container's disk usage: we are implementing a watchdog (script) which checks each container's disk usage and stops those that exceed a limit<br />
* Think about restricting access to Docker containers: for the moment, providers are administrators and can easily access containers<br />
* See how instances can easily give their network information to the coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect over SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We now have a solid foundation for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file; modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (to publish from the provider to the front-end, for example)<br />
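The authorized_keys manipulation described above can be sketched as a small shell helper. File paths, key contents and function names are illustrative assumptions, not the project's actual script.<br />

```shell
#!/bin/sh
# Sketch (illustrative, not the project's actual script) of the key-management
# step: add a client's public key to a container's authorized_keys file, or
# remove it again from the coordinator's.

add_client_key() {
    # $1 = path to authorized_keys, $2 = public key line
    keyfile=$1; pubkey=$2
    # append only if the exact line is not already present
    grep -qxF "$pubkey" "$keyfile" 2>/dev/null || echo "$pubkey" >> "$keyfile"
}

remove_client_key() {
    # $1 = path to authorized_keys, $2 = public key line
    keyfile=$1; pubkey=$2
    grep -vxF "$pubkey" "$keyfile" > "$keyfile.tmp"; mv "$keyfile.tmp" "$keyfile"
}
```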
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH jump configuration for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Set up RabbitMQ on both the front-end side and the provider side<br />
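The client-side SSH jump configuration mentioned above can be sketched as follows. The coordinator jump host on port 22000 follows the architecture described elsewhere on this page; the alias, user name and addresses are illustrative assumptions.<br />

```shell
#!/bin/sh
# Sketch: emit a client-side ~/.ssh/config entry that reaches an instance
# through the provider's coordinator container (SSH jump host on port 22000).
# Alias, user names and addresses are illustrative assumptions.

make_jump_config() {
    # $1 = instance alias, $2 = provider host, $3 = instance address on the Docker network
    printf 'Host %s\n' "$1"
    printf '  HostName %s\n' "$3"
    printf '  ProxyCommand ssh -W %%h:%%p -p 22000 client@%s\n' "$2"
}
```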
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that runs a command every 30 seconds: that command is used to send the file containing the containers' information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one contains only the public key it needs in its authorized_keys file<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
** Modify the watchdog's behaviour: up to now, the script just checked that each instance respected a single limit. Its new behaviour allows a different disk usage limit for each instance. Since we now use a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker called cAdvisor which gives us enough information about the containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix the "CSS file delivered as HTML file" warning from Meteor<br />
** Add a README to explain how to use the scripts and how the files are organized (for the GitHub branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to the instance from the client<br />
** Some permissions on the coordinator instance needed to be changed<br />
** The default SSH configuration needed to be changed to disable root login and password authentication<br />
** The connection from a client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, and possibly develop some advanced functionalities during the last two weeks.<br />
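The unique instance-name scheme described in the frontend notes above is simple string assembly; a minimal sketch, assuming the per-user counter is supplied by the front-end database:<br />

```shell
#!/bin/sh
# Sketch of the instance-name scheme used by the front-end:
# <username>-<provider_domain_name>-<num_instance_user_at_provider>.
# The per-user counter is assumed to come from the database.

instance_name() {
    # $1 = username, $2 = provider domain name, $3 = per-user counter at this provider
    printf '%s-%s-%s\n' "$1" "$2" "$3"
}
```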
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Write report for our last MPI course<br />
* Finish setting up RabbitMQ on the front-end<br />
* Test complete loop:<br />
** Create profile<br />
** Set required information (ssh public key)<br />
** As a provider, give settings of the provided machine<br />
** As a client, ask for an instance<br />
** As a client, connect to the instance (SSH)<br />
** Check that RabbitMQ is correctly relaying information about containers/instances <br />
* Add a rating system which will be used to give a mark to providers.<br />
* Start preparing the presentation<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to set up developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run them on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications.<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before: from adding new capabilities and scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to use only the resources you need when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues and isolate the problem container, quickly roll back to make the necessary changes, then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance allows us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|500px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It's not possible to do so from inside a container; that task has to be done from the host. That's why the first step is to create a new user on the provider machine, which we will use to launch and stop containers. Once this is done, we deploy the necessary scripts in this user's home directory. These scripts are needed to launch and stop containers, and deploying them at this stage is simpler than transferring them from the coordinator to the host when the connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
The second step consists in building the coordinator and monitoring (cAdvisor) images. To do so we use Dockerfiles, which allow us to build containers holding everything we need. The coordinator instance contains just an SSH server. That container exposes its port 22000 and is used as a jump host to connect the front-end/clients to the other instances.<br />
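A hypothetical sketch of such a coordinator Dockerfile. The base image, package names and sed pattern are assumptions; only the exposed port 22000 comes from the description above.<br />

```dockerfile
# Hypothetical coordinator image: an SSH server used as a jump host on
# port 22000. Base image and package names are illustrative assumptions.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
# move sshd from the default port to the one the coordinator exposes
RUN sed -i 's/^#\?Port 22$/Port 22000/' /etc/ssh/sshd_config
EXPOSE 22000
CMD ["/usr/sbin/sshd", "-D"]
```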
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on the Docker daemon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
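The three steps above can be sketched as one provider-side bootstrap script. All names (user, image tags, script paths) are illustrative assumptions; the RUN indirection allows a dry run that just prints the commands instead of executing them.<br />

```shell
#!/bin/sh
# Sketch of the provider bootstrap: user creation, image build, container run.
# User name, image tags and script paths are illustrative assumptions.
# Set RUN=echo for a dry run.
RUN=${RUN:-}

bootstrap_provider() {
    # 1. dedicated host user through which containers are launched and stopped
    $RUN useradd -m iaas-deploy
    $RUN cp startInstance.sh stopInstance.sh /home/iaas-deploy/
    # 2. build the coordinator and monitoring images from their Dockerfiles
    $RUN docker build -t iaas/coordinator coordinator/
    $RUN docker build -t iaas/monitoring monitoring/
    # 3. run both; the coordinator publishes its SSH jump port
    $RUN docker run -d -p 22000:22000 --name coordinator iaas/coordinator
    $RUN docker run -d --name monitoring iaas/monitoring
}
```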
<br />
== Resources management ==<br />
<br />
Docker already provides some functionalities that allow us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves, such as disk space and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already in use. There is a Docker option (--cpuset-cpus) we can use when launching a container that allows us to choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
<br />
<br />
'''Memory:''' While launching a container, we set a memory soft limit equal to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit. But if several containers are running on the same host, Docker will ensure that each container doesn't consume more memory than its soft limit.<br />
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage, and yet it's really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops any container that reaches the limit defined by the provider. We also use that watchdog to inspect and save the containers' information, which is used on the front-end to display each container's state and disk space usage. Thanks to that, clients will know when they are about to reach the limit.<br />
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqlib , the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27877Projets-2015-2016-IaaS Docker2016-03-14T09:51:33Z<p>Romain.Badamo-Barthelemy: /* Resources management */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run big-data computations for a few users. To do so, the solution relies on Docker to virtualize user machines and control each machine's resource usage.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather the ideas that would be good to implement in the future (after the end of our project) to improve the service.<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their private profile. We imagine that clients could consult providers' profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes geographic allocation between providers and clients (better network): it's better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent the provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are administrators of their machines, they can see what the containers contain, or enter them. It would be good to guarantee clients that their instances are totally safe and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue when using a public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container; limit resource allocation at container launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to provide an easy way to limit a container's disk usage: implementing a watchdog (script) which will check containers' disk usage and stop those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to the coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype with a basic front-end which makes it possible to launch a remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect over SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We already have a solid cornerstone for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file; modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up Rabbitmq (publish from provider to front-end for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH jump-host config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
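The SSH jump-host config generated for a client (mentioned above) could resemble the following sketch. Host names and addresses are made up; the real file would be appended to the client's ~/.ssh/config:<br />

```shell
#!/bin/sh
# Write an example SSH jump-host configuration for a client: connect
# to the coordinator on port 22000 first, then hop to the instance.
cat > ssh_config.example <<'EOF'
Host iaas-coordinator
    HostName provider.example.org
    Port 22000
    User client

Host my-instance
    HostName 172.18.0.5
    User root
    # Tunnel through the coordinator (ProxyCommand works on old OpenSSH)
    ProxyCommand ssh -W %h:%p iaas-coordinator
EOF
echo "config written"
```

The client would then connect with something like ''ssh -F ssh_config.example my-instance''.<br />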
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that runs a command every 30 seconds: that command is used to send the file containing the containers' information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one's authorized_keys file contains only the public key it needs<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
** Modify the watchdog's behaviour: up to now, the script was just checking that each instance respected a single limit; it now supports a different disk quota for each instance. Since we now use a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker called cAdvisor which gives us enough information about containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix Meteor's "CSS file delivered as HTML file" warning<br />
** Add a README to explain how to use the scripts and how files are organized (for the github branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** The default SSH configuration needed to be changed to disable root login and password authentication<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Write report for our last MPI course<br />
* End of Rabbitmq set up on front-end<br />
* Test complete loop:<br />
** Create profile<br />
** Set required information (ssh public key)<br />
** As a provider, give settings of the provided machine<br />
** As a client, ask for an instance<br />
** As a client connect to the instance (ssh)<br />
** Check that Rabbitmq is correctly tracing back information about containers/instances <br />
* Add a rating system which will be used to give a mark to providers.<br />
* Start preparing the presentation<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before, from adding new capabilities and scaling out services to quickly fixing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more frequently after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin those containers down to use only the resources you need, when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance will allow us to launch new containers and to establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|500px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It's not possible to do so from inside a container; that task needs to be done from the host. That's why the first step is to create a new user on the provider machine that we will use to launch and stop containers. Once this is done, we deploy the necessary scripts in this user's home directory. Those scripts are needed to launch and stop containers, and it is simpler for us to deploy them now than to transfer them from the coordinator to the host each time a connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (cAdvisor) images. To do so we use Dockerfiles, which allow us to build containers holding everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22000 and will be used as a jump host to connect the front-end and the clients to the other instances.<br />
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on the Docker daemon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides some functionalities which allow us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves, such as disk space and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already in use. There is a Docker option (--cpuset-cpus) we can use when launching a container that allows us to choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
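As a sketch, that bookkeeping could map a free physical CPU to its range of logical cores before passing it to --cpuset-cpus. Variable names and the sibling-core numbering are assumptions; real hosts may number hyper-thread siblings differently:<br />

```shell
#!/bin/sh
# With 4 physical CPUs and a hyper-threading coefficient of 2, the host
# exposes 8 logical cores; here we assume physical CPU n owns logical
# cores n*2 and n*2+1.
HT_COEF=2

cpuset_for() {  # usage: cpuset_for <physical_cpu_index>
    first=$(( $1 * HT_COEF ))
    last=$(( first + HT_COEF - 1 ))
    echo "$first-$last"
}

cpuset_for 0   # -> 0-1
cpuset_for 3   # -> 6-7

# On the provider, assuming physical CPU 1 is marked free in our bookkeeping:
#   docker run -d --cpuset-cpus="$(cpuset_for 1)" client-image
```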
<br />
<br />
'''Memory:''' While launching a container, we set a memory soft limit equal to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit. But if several containers are running on the same host, Docker will ensure that each container doesn't consume more memory than its soft limit.<br />
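A hypothetical launch command matching this scheme, using Docker's --memory-reservation (soft limit) and --memory (hard limit) flags; the values and image name are illustrative:<br />

```shell
#!/bin/sh
# Soft limit = what the client reserved; hard limit = the maximum the
# provider offers. Printed as a dry run; drop 'echo' on a real host.
CLIENT_RESERVED="512m"
PROVIDER_MAX="4g"

echo docker run -d \
    --memory-reservation "$CLIENT_RESERVED" \
    --memory "$PROVIDER_MAX" \
    client-image
```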
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage, and yet it's really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops any container that reaches the limit defined by the provider. We also use that watchdog to inspect and save the containers' information, which is used on the front-end to display each container's state and disk space usage. Thanks to that, clients will know when they are about to reach the limit.<br />
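The watchdog's core check could be sketched as follows. The quota value and the way a container's usage is measured are assumptions; the real script also records container state for the front-end:<br />

```shell
#!/bin/sh
# Quota check used by the watchdog; pure shell so the sketch is testable.
LIMIT_MB=1024

over_quota() {  # usage: over_quota <used_mb> <limit_mb>
    [ "$1" -gt "$2" ]
}

# On the provider host, run every 30 seconds (requires a Docker daemon):
#   docker ps --format '{{.Names}}' | while read -r name; do
#       used=$(docker exec "$name" du -sxm / 2>/dev/null | cut -f1)
#       if over_quota "${used:-0}" "$LIMIT_MB"; then
#           echo "stopping $name (${used}MB > ${LIMIT_MB}MB)"
#           docker stop "$name"
#       fi
#   done

over_quota 2048 "$LIMIT_MB" && echo "2048MB exceeds the quota"
```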
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
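A hypothetical invocation on the provider: the interface name and rates are assumptions, and classic Wondershaper takes the interface followed by downlink/uplink rates in kbit/s:<br />

```shell
#!/bin/sh
# Cap the Docker bridge so all containers share a bounded pipe.
# Printed as a dry run; drop 'echo' on a real host (needs root).
IFACE="docker0"
DOWN_KBPS=8192   # ~8 Mbit/s downlink shared by the containers
UP_KBPS=2048     # ~2 Mbit/s uplink

echo wondershaper "$IFACE" "$DOWN_KBPS" "$UP_KBPS"

# To remove the limit later:
#   wondershaper clear "$IFACE"
```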
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqlib , the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27873Projets-2015-2016-IaaS Docker2016-03-14T09:50:16Z<p>Romain.Badamo-Barthelemy: /* Resources management */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a user group (member) to pool their laptops or desktop in order to calculate big data of few users. To do so, the solution should work with Docker to virtualize user machines and control the use of resources of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our waffle shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas we have which would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profile: at the moment, users can only access their private profile. We imagine that we can consult providers profile to see which one is best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement algorithm that optimize geographic allocation between providers and clients (better network): it's better for both clients and providers to be on the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find way to prevent provider to enter in the instance and do whatever he wants, or see what each instances running on his machines contains (difficult): since providers are admin of their machine they can see what the containers contain, or enter the containers. It would be good to guarantee clients that their instances are totally safe, and no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31th ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit space disk usage of each container, limit resources allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to implement easy way to limit container's disk usage: implementing a watchdog (script) which will check container's disk usage and stop those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create user on host: will be used to connect ourselves in ssh from coordinator instance to host and launch deployment scripts<br />
** Create script that totally automatizes user creation, images creation and build, coordinator's and shinken's containers launch<br />
* At the end of the week, the prototype is working: we can launch an instance an a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We have a good cornerstone of our project yet.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create script that automatically set client public key in container's authorized_keys file, modify some script to automatically delete client public key in coordinator's authorized_keys file<br />
* Start to study and set up Rabbitmq (publish from provider to front-end for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Establish script that automatically create SSH-jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
<br />
=== Week 6: February 29th - Mars 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that runs a command every 30 seconds; that command sends the file containing container information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one's authorized_keys file contains only the public key it needs<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
** Modify the watchdog's behaviour: until now, the script only checked whether each instance respected a single limit; it now supports a different disk quota per instance. Since it runs as a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker called cAdvisor, which gives us enough information about containers.<br />
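Cron's finest granularity is one minute, so a 30-second period is typically obtained with two crontab entries, the second delayed by half a minute; the script path below is hypothetical.<br />

```
* * * * * /home/provider/sendContainerInfo.sh
* * * * * sleep 30 && /home/provider/sendContainerInfo.sh
```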
<br />
* Frontend dev:<br />
** Generate a proper, unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix Meteor's "CSS file delivered as HTML file" warning<br />
** Add a README explaining how to use the scripts and how files are organized (for the GitHub branches frontendWebui, docker and master)<br />
** Improve user feedback (notifications) on errors/successes<br />
** Proper parameters to start/stop instances<br />
** Add a username field in the profile<br />
** Resolve bugs occurring when machines allocate resources for a different user<br />
<br />
* Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** SSH default configuration needed to be changed to: disable root login and authentication by password<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Write report for our last MPI course<br />
* Finish setting up RabbitMQ on the front-end<br />
* Test complete loop:<br />
** Create profile<br />
** Set required information (ssh public key)<br />
** As a provider, give settings of the provided machine<br />
** As a client, ask for an instance<br />
** As a client, connect to the instance (SSH)<br />
** Check that RabbitMQ is correctly reporting information about containers/instances<br />
* Add a rating system which will be used to give a mark to providers.<br />
* Start preparing the presentation<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to set up developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run them on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications.<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before, from adding new capabilities and scaling out services to quickly fixing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more often after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to only use the resources you need when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
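A client-side ~/.ssh/config entry for this jump-host setup could look like the following; host names, user names and addresses are purely illustrative (only the coordinator's port 22000 comes from the architecture above).<br />

```
Host my-instance
    HostName 172.18.0.5          # instance address on the Docker network
    User client
    # Tunnel through the coordinator container acting as jump host
    ProxyCommand ssh -W %h:%p -p 22000 client@provider.example.org
```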
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (cAdvisor, which replaced Shinken). The coordinator instance allows us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since the front-end can only interact with the coordinator instance, we need a way to launch new containers. This cannot be done from inside a container; it has to be done on the host. That is why the first step is to create a new user on the provider machine, which we use to launch and stop containers. Once the user is created, we deploy the necessary launch/stop scripts in its home directory; this is simpler than transferring those files from the coordinator to the host every time a connection is established.<br />
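This bootstrap amounts to something like the commands below; the user name and script names are illustrative, not the project's actual ones.<br />

```
# Create a dedicated user on the provider host
sudo useradd --create-home deployer
# Ship the launch/stop scripts into its home directory
sudo cp startContainer.sh stopContainer.sh /home/deployer/
sudo chown deployer:deployer /home/deployer/*.sh
```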
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (cAdvisor) images. To do so we use Dockerfiles, which let us build containers holding everything we need. The coordinator instance just runs an SSH server. That container exposes its port 22000 and is used as a jump host to connect the front-end and clients to the other instances.<br />
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on the Docker daemon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
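Assuming a directory holding the coordinator's Dockerfile, the build-and-run steps amount to something like the following; image names, paths and the exact cAdvisor volume flags are illustrative.<br />

```
# Build the coordinator image from its Dockerfile
docker build -t iaas/coordinator ./coordinator
# Run it, exposing the SSH jump port
docker run -d --name coordinator -p 22000:22000 iaas/coordinator
# Run the cAdvisor monitoring container
docker run -d --name cadvisor -p 8080:8080 \
  --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro \
  --volume=/var/lib/docker:/var/lib/docker:ro \
  google/cadvisor
```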
<br />
== Resources management ==<br />
<br />
Docker already provides functionality to restrict CPU and memory usage. However, we needed to implement some restrictions ourselves, such as disk space usage and bandwidth.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already in use. Docker offers an option at container launch that lets us choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
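The Docker option in question is --cpuset-cpus, which pins a container to specific logical CPUs; the image name is illustrative, and the actual CPU numbering depends on the host's topology.<br />

```
# Give this instance logical CPUs 0 and 1 (one physical core with
# hyper-threading, if sibling threads are numbered consecutively)
docker run -d --cpuset-cpus="0,1" iaas/instance
```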
<br />
<br />
'''Memory:''' While launching a container, we set a memory soft limit equal to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit. But if several containers are running on the same host, Docker ensures that each container doesn't consume more memory than its soft limit.<br />
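With Docker's flags this translates to the sketch below; the image name and the values are illustrative.<br />

```
# Soft limit = amount reserved by the client,
# hard limit = maximum memory made available by the provider
docker run -d --memory-reservation=512m --memory=2g iaas/instance
```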
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a way to restrict disk usage, and yet it's really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops any container that reaches the limit defined by the provider. We also use that watchdog to inspect and save container information, which the front-end uses to display each container's state and disk usage. Thanks to that, clients know when they are about to reach the limit.<br />
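A minimal sketch of such a watchdog is shown below; the quota value and the measurement method are illustrative, and the real script also saves per-container information for the front-end.<br />

```
#!/bin/sh
LIMIT_MB=2048   # illustrative per-container quota
for id in $(docker ps -q); do
  # Measure space used inside the container's filesystem, in MB
  used=$(docker exec "$id" du -smx / 2>/dev/null | cut -f1)
  if [ "${used:-0}" -gt "$LIMIT_MB" ]; then
    docker stop "$id"
  fi
done
```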
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we can use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably between containers.<br />
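With Wondershaper this amounts to shaping the Docker bridge interface; rates are given in kbit/s and the values below are illustrative.<br />

```
# Cap docker0 at ~10 Mbit/s down and ~2 Mbit/s up
wondershaper docker0 10240 2048
```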
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqlib , the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run big-data computations for a few users. To do so, the solution relies on Docker to virtualize the users' machines and control each machine's resource usage.<br />
<br />
Project under the GPLv3 license: https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather the ideas that would be good to implement in the future (after the end of our project) to improve the service.<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour by providers or clients<br />
* Implement public profiles: at the moment, users can only access their own private profile. We imagine being able to consult provider profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best-rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes geographic allocation between providers and clients (better network): it's better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent the provider from entering an instance, doing whatever he wants, or seeing what the instances running on his machine contain (difficult): since providers are admins of their machines, they can see what the containers contain or enter them. It would be good to guarantee clients that their instances are totally safe, and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Writing the specifications, creating architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container, limit resource allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to offer an easy way to limit a container's disk usage: implementing a watchdog (script) that checks each container's disk usage and stops those that exceed a limit<br />
* Think about restricting access to Docker containers: for the moment, providers are admins and can easily access containers<br />
* See how instances can easily give their network information to the coordinator<br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of the technologies used<br />
* Finish writing the specifications + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to SSH from the coordinator instance to the host and launch deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator and Shinken containers<br />
* At the end of the week, the prototype is working: we can launch an instance an a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We have a good cornerstone of our project yet.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create script that automatically set client public key in container's authorized_keys file, modify some script to automatically delete client public key in coordinator's authorized_keys file<br />
* Start to study and set up Rabbitmq (publish from provider to front-end for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Establish script that automatically create SSH-jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
<br />
=== Week 6: February 29th - Mars 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that will run a command every 30 seconds: that command will be used to send the file that contains container's information to rabbitmq server<br />
** Modify coordinator to set up 2 users: one for the front-end and one for the clients. Each one will contain only the public key they need in authorized_keys' file<br />
** Modify startProvider script to check is ssh-server is installed and running on provider, and change default port (22 to 22000)<br />
** Modify watchdog functioning: up to now, the script was just checking if each instance was respecting a limit. Now its behaviour allows us to have different disk usage for each instance. Now we use a cron job, we won't need anymore to launch the script by ourselves<br />
** Change monitoring system: we found an other monitoring system for Docker called cAdvisor which gives us enough informations about containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name : <username>-<provider_domain_name>-<num_instance_user_at_provider> eg. : toto-domain1-0<br />
** Add form to modify provider machines informations<br />
** Fix warning "CSS file deliver as html file" by Meteor<br />
** Add README to explain how to use scripts, how files are organized (for github branch : frontendWebui , docker , master )<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** SSH default configuration needed to be changed to: disable root login and authentication by password<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, eventually develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: Mars 7th - Mars 13th ===<br />
* Finish creating the flyer<br />
* Write report for our last MPI course<br />
* End of Rabbitmq set up on front-end<br />
* Test complete loop:<br />
** Create profile<br />
** Set required information (ssh public key)<br />
** As a provider, give settings of the provided machine<br />
** As a client, ask for an instance<br />
** As a client connect to the instance (ssh)<br />
** Check that Rabbitmq is correctly tracing back information about containers/instances <br />
* Add a rating system which will be used to give a mark to providers.<br />
* Start preparing the presentation<br />
<br />
=== Week 8: Mars 14th - Mars 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before from adding new capabilities, scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to only use the resources you need when you need it<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker make it easy to identify issues and isolate the problem container, quickly roll back to make the necessary changes then push the updated container into production. The isolation between containers make these changes less disruptive than traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automatize the containers deployment on provider side. This includes launching coordinator instance and monitoring instance (shinken). The coordinator instance will allow us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new container. It's not possible to do so from a container, and that task needs to be done from the host. That's why the first step is to create a new user on provider machine that we will use to launch new containers or stop them. The moment it is done, we deploy necessary scripts in this user's home. Those scripts are necessary to launch and stop new containers. It is simpler for us to do so than transferring those files from the coordinator to the host when the connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building coordinator and monitoring (cAdvisor) images. To do so we use Dockerfile that allows us to build a container containing all we need. The coordinator instance just contains a ssh web-server. That container exposes its port 22000 and will be used as a jump host to connect the front-end/clients to the other instances.<br />
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on Docker deamon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides some functionalities which allows us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves like space disk usage and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPU is already used. There is a Docker option we can use while launching container that allow us to choose which CPU the container will use to run. <br />
The example below shows how this works with 4 CPU (and hyper-threading coefficient is 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
<br />
<br />
'''Memory:''' While launching a container, we set memory soft limit as the value required/reserved by the client. The hard limit is set as the maximum memory made available by the provider In doing so, a container can use more memory that his soft limit. But if several containers are running on the same host, Docker will ensure that each container doesn't consume more memory than his soft limit.<br />
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage, yet it is really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops any container that reaches the limit defined by the provider. We also use that watchdog to inspect and save container information, which the front-end uses to display each container's state and disk usage. Thanks to that, clients know when they are about to reach the limit.<br />
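<br />
The watchdog's core check can be sketched as follows (paths and names are hypothetical; only the comparison runs here, the Docker calls are shown as comments):<br />

```shell
#!/bin/sh
# Hypothetical sketch of the watchdog's decision logic. On the provider,
# usage would come from inspecting each container's directory, and an
# offending container would get `docker stop`.
over_limit() {  # over_limit <usage_in_MB> <limit_in_MB>
    [ "$1" -ge "$2" ]
}
# Real loop (run periodically), sketched as comments:
#   for id in $(docker ps -q); do
#       usage=$(du -sm "/var/lib/docker/containers/$id" | cut -f1)
#       over_limit "$usage" "$LIMIT_MB" && docker stop "$id"
#   done
over_limit 512 500 && echo "would stop container"
```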
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
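<br />
A sketch of the Wondershaper call (classic syntax: interface, downlink and uplink in kbit/s; the interface name and rates are illustrative):<br />

```shell
#!/bin/sh
# Hypothetical sketch: cap the Docker bridge interface with Wondershaper.
# The privileged command is only printed here, not executed.
IFACE="docker0"
DOWN_KBPS=8192
UP_KBPS=8192
SHAPE_CMD="wondershaper $IFACE $DOWN_KBPS $UP_KBPS"
echo "$SHAPE_CMD"
```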
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27870Projets-2015-2016-IaaS Docker2016-03-14T09:48:34Z<p>Romain.Badamo-Barthelemy: /* Build and run */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run big data computations for a few of those users. To do so, the solution relies on Docker to virtualize the users' machines and to control each machine's resource usage.<br />
<br />
Project under the GPLv3 license: https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas that would be good to implement in the future (after the end of our project) to improve the service.<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their private profile. We imagine being able to consult providers' profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes geographic allocation between providers and clients (better network): it is better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent a provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are administrators of their machines, they can see what the containers contain, or enter the containers. It would be good to guarantee clients that their instances are totally safe, and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container; limit resource allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to provide an easy way to limit a container's disk usage: implementing a watchdog (script) which will check each container's disk usage and stop those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to the coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch a remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect via SSH from the coordinator instance to the host and launch deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We already have a good cornerstone for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file; modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up Rabbitmq (publish from provider to front-end for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH jump configuration for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that runs a command every 30 seconds: that command is used to send the file containing the containers' information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one's authorized_keys file contains only the public key it needs<br />
** Modify the startProvider script to check if an SSH server is installed and running on the provider, and change the default port (from 22 to 22000)<br />
** Modify the watchdog's behaviour: up to now, the script was just checking that each instance respected a single limit. Its new behaviour allows a different disk usage limit for each instance. Since we now use a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker called cAdvisor, which gives us enough information about containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix the Meteor warning about a CSS file being delivered as an HTML file<br />
** Add a README to explain how to use the scripts and how files are organized (for GitHub branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** The SSH default configuration needed to be changed to disable root login and password authentication<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Write report for our last MPI course<br />
* End of Rabbitmq set up on front-end<br />
* Test complete loop:<br />
** Create profile<br />
** Set required information (ssh public key)<br />
** As a provider, give settings of the provided machine<br />
** As a client, ask for an instance<br />
** As a client connect to the instance (ssh)<br />
** Check that Rabbitmq is correctly tracing back information about containers/instances <br />
* Add a rating system which will be used to give a mark to providers.<br />
* Start preparing the presentation<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to set up developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications.<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before: from adding new capabilities and scaling out services to quickly addressing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more often after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to only use the resources you need when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, and then push the updated container into production. The isolation between containers makes these changes less disruptive than traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken, since replaced by cAdvisor). The coordinator instance allows us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It is not possible to do so from inside a container; that task has to be done from the host. That is why the first step is to create a new user on the provider machine, which we will use to launch and stop containers. Once this user exists, we deploy the necessary scripts in its home directory. Those scripts handle launching and stopping containers; deploying them once is simpler than transferring them from the coordinator to the host every time a connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (cAdvisor) images. To do so, we use Dockerfiles, which allow us to build images containing everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22000 and will be used as a jump host to connect the front-end and clients to the other instances.<br />
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, once the images are successfully built, we can run these containers on the Docker daemon. We are then able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides some functionalities which allow us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves, such as disk space usage and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already in use. There is a Docker option (<code>--cpuset-cpus</code>) we can pass when launching a container that lets us choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
<br />
<br />
'''Memory:''' While launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit; but if several containers are running on the same host, Docker will ensure that each container does not consume more memory than its soft limit.<br />
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage, yet it is really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops any container that reaches the limit defined by the provider. We also use that watchdog to inspect and save container information, which the front-end uses to display each container's state and disk usage. Thanks to that, clients know when they are about to reach the limit.<br />
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a user group (member) to pool their laptops or desktop in order to calculate big data of few users. To do so, the solution should work with Docker to virtualize user machines and control the use of resources of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our waffle shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas we have which would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profile: at the moment, users can only access their private profile. We imagine that we can consult providers profile to see which one is best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement algorithm that optimize geographic allocation between providers and clients (better network): it's better for both clients and providers to be on the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find way to prevent provider to enter in the instance and do whatever he wants, or see what each instances running on his machines contains (difficult): since providers are admin of their machine they can see what the containers contain, or enter the containers. It would be good to guarantee clients that their instances are totally safe, and no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31th ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit space disk usage of each container, limit resources allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to implement easy way to limit container's disk usage: implementing a watchdog (script) which will check container's disk usage and stop those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create user on host: will be used to connect ourselves in ssh from coordinator instance to host and launch deployment scripts<br />
** Create script that totally automatizes user creation, images creation and build, coordinator's and shinken's containers launch<br />
* At the end of the week, the prototype is working: we can launch an instance an a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We have a good cornerstone of our project yet.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create script that automatically set client public key in container's authorized_keys file, modify some script to automatically delete client public key in coordinator's authorized_keys file<br />
* Start to study and set up Rabbitmq (publish from provider to front-end for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Establish script that automatically create SSH-jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
<br />
=== Week 6: February 29th - Mars 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that will run a command every 30 seconds: that command will be used to send the file that contains container's information to rabbitmq server<br />
** Modify coordinator to set up 2 users: one for the front-end and one for the clients. Each one will contain only the public key they need in authorized_keys' file<br />
** Modify startProvider script to check is ssh-server is installed and running on provider, and change default port (22 to 22000)<br />
** Modify watchdog functioning: up to now, the script was just checking if each instance was respecting a limit. Now its behaviour allows us to have different disk usage for each instance. Now we use a cron job, we won't need anymore to launch the script by ourselves<br />
** Change monitoring system: we found an other monitoring system for Docker called cAdvisor which gives us enough informations about containers.<br />
<br />
* Front-end dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify the provider's machine information<br />
** Fix the Meteor warning about a CSS file being delivered as an HTML file<br />
** Add a README explaining how to use the scripts and how the files are organized (for the GitHub branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/successes<br />
** Pass proper parameters to start/stop instances<br />
** Add a username field in the profile<br />
** Fix bugs occurring when machines allocate resources belonging to a different user<br />
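The naming scheme above can be sketched as a small helper (the function name is ours, not the project's):<br />

```python
def instance_name(username: str, provider_domain: str, num_instance: int) -> str:
    """Build the unique instance name <username>-<provider_domain>-<num_instance>."""
    return "{}-{}-{}".format(username, provider_domain, num_instance)

print(instance_name("toto", "domain1", 0))  # toto-domain1-0
```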
<br />
* Test and feedback:<br />
** Set up the main test: container deployment and access to the instance from the client<br />
** Some permissions on the coordinator instance needed to be changed<br />
** The default SSH configuration needed to be changed to disable root login and password authentication<br />
** Connection from a client to his instance is working<br />
=> The main development phase is finished, since we have a working base. We still need to improve some things, and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Write the report for our last MPI course<br />
* Finish setting up RabbitMQ on the front-end<br />
* Test the complete loop:<br />
** Create a profile<br />
** Set the required information (SSH public key)<br />
** As a provider, give the settings of the provided machine<br />
** As a client, ask for an instance<br />
** As a client, connect to the instance (SSH)<br />
** Check that RabbitMQ correctly reports information about containers/instances <br />
* Add a rating system which will be used to rate providers.<br />
* Start preparing the presentation<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications.<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before: from adding new capabilities and scaling out services to quickly fixing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin those containers down to use only the resources you need, when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues and isolate the problem container, quickly roll back to make the necessary changes, then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Legend]]<br />
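The figures above describe SSH access through the coordinator acting as a jump host. On the client side this corresponds to an OpenSSH configuration that could be generated roughly as follows (a sketch: the host names, the `client` user, and the host port 22000 are assumptions drawn from the weekly notes):<br />

```python
def ssh_jump_config(provider_host: str, instance_ip: str, instance_name: str,
                    coordinator_port: int = 22000) -> str:
    """Render an OpenSSH client config that reaches an instance via the coordinator."""
    return (
        "Host coordinator\n"
        "    HostName {provider}\n"
        "    Port {port}\n"
        "    User client\n"
        "\n"
        "Host {name}\n"
        "    HostName {ip}\n"
        "    ProxyJump coordinator\n"
    ).format(provider=provider_host, port=coordinator_port,
             name=instance_name, ip=instance_ip)

print(ssh_jump_config("provider.example.org", "172.17.0.5", "toto-domain1-0"))
```

With such a config, `ssh toto-domain1-0` would first open a connection to the coordinator and tunnel through it to the instance.<br />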
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (cAdvisor, which replaced Shinken during the project). The coordinator instance allows us to launch new containers and establishes the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since, from the front-end, we can only interact with the coordinator instance, we need a way to launch new containers. This cannot be done from inside a container; it has to be done from the host. That is why the first step is to create a new user on the provider machine, which we will use to launch and stop containers. Once it is created, we deploy the necessary start/stop scripts in this user's home directory: this is simpler than transferring those files from the coordinator to the host each time a connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (cAdvisor) images. To do so we use Dockerfiles, which allow us to build containers with everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22 and is used as a jump host to connect the front-end and clients to the other instances.<br />
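A coordinator image of this kind could look roughly like the following Dockerfile (a sketch under Debian-base assumptions, not the project's actual file):<br />

```dockerfile
FROM debian:jessie
# The coordinator only needs an SSH server to act as a jump host
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
# Harden the SSH config: no root login, key-based authentication only
RUN sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config && \
    sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

It would then be built and started with something like `docker build -t coordinator .` followed by `docker run -d -p 22000:22 coordinator`, mapping the container's port 22 to the host port 22000 mentioned in the weekly notes.<br />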
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, once the images are successfully built, we can run these containers on the Docker daemon. We are then able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides functionalities that allow us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves, such as disk space usage and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already in use. Docker provides an option we can pass when launching a container that lets us choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
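The bookkeeping described above can be sketched as follows: with 4 physical CPUs and a hyper-threading coefficient of 2, the host exposes 8 logical CPUs, and each allocation reserves free logical CPUs and turns them into a value usable with Docker's `--cpuset-cpus` option (a sketch; the function name is ours):<br />

```python
def allocate_cpus(free, requested):
    """Reserve `requested` logical CPUs from the `free` set.

    Returns (cpuset string for `docker run --cpuset-cpus`, remaining free set).
    """
    if requested > len(free):
        raise ValueError("not enough free CPUs on this provider")
    picked = sorted(free)[:requested]
    remaining = set(free) - set(picked)
    return ",".join(str(c) for c in picked), remaining

# 4 physical CPUs with hyper-threading coefficient 2 => 8 logical CPUs
free = set(range(4 * 2))
cpuset, free = allocate_cpus(free, 2)
print(cpuset)  # 0,1
```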
<br />
<br />
'''Memory:''' When launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit, but if several containers are running on the same host, Docker ensures that each container does not consume more memory than its soft limit.<br />
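On Docker's command line this soft/hard pair maps to the `--memory-reservation` (soft) and `--memory` (hard) flags of `docker run`; a sketch of building the corresponding arguments (the helper name is ours):<br />

```python
def memory_args(client_reserved_mb: int, provider_max_mb: int) -> list:
    """Docker run flags: soft limit = client reservation, hard limit = provider max."""
    return [
        "--memory-reservation", "{}m".format(client_reserved_mb),  # soft limit
        "--memory", "{}m".format(provider_max_mb),                 # hard limit
    ]

print(memory_args(512, 2048))
```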
<br />
<br />
'''Disk:''' Docker does not seem to provide a functionality to restrict disk usage, and yet it is really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops those that reach the limit defined by the provider. We also use that watchdog to inspect and save the containers' information, which is used on the front-end to display each container's state and disk space usage. Thanks to that, clients know when they are about to reach the limit.<br />
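The watchdog's core check can be sketched as pure logic (the real script would measure usage on the host and stop offenders with `docker stop`; names here are ours):<br />

```python
def check_disk(usages_mb, limits_mb):
    """Return the names of containers whose disk usage reached their limit.

    usages_mb / limits_mb: dicts mapping container name -> megabytes.
    Containers without a known limit are never flagged.
    """
    return sorted(
        name for name, used in usages_mb.items()
        if used >= limits_mb.get(name, float("inf"))
    )

# Every 30 seconds the cron-driven watchdog would stop the offenders:
print(check_disk({"toto-domain1-0": 950, "bob-domain1-0": 200},
                 {"toto-domain1-0": 900, "bob-domain1-0": 1024}))  # ['toto-domain1-0']
```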
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we can use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
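Because the cap is set on the shared bridge interface, a single Wondershaper invocation covers all containers. A sketch of building that invocation (the `docker0` interface name and the classic `wondershaper <iface> <down> <up>` argument order are assumptions):<br />

```python
def wondershaper_cmd(iface: str, down_kbps: int, up_kbps: int) -> list:
    """Classic wondershaper invocation: wondershaper <iface> <down> <up> (kbit/s)."""
    return ["wondershaper", iface, str(down_kbps), str(up_kbps)]

# Limit the whole Docker bridge; Docker then shares bandwidth among containers
print(wondershaper_cmd("docker0", 8192, 1024))
```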
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27867Projets-2015-2016-IaaS Docker2016-03-14T09:46:30Z<p>Romain.Badamo-Barthelemy: /* Build and run */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run big-data computations for a few users. To do so, the solution works with Docker to virtualize the users' machines and control the use of each machine's resources.<br />
<br />
Project under the GPLv3 licence: https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas that would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour by providers or clients<br />
* Implement public profiles: at the moment, users can only access their private profile. We imagine that clients could consult provider profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best-rated providers (a special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes the geographic allocation between providers and clients (better network): it is better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent the provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are admins of their machines, they can see what the containers contain or enter them. It would be good to guarantee clients that their instances are totally safe and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container; limit resource allocation at the containers' launch.<br />
** CPU and memory allocation: OK<br />
** Docker doesn't seem to provide an easy way to limit a container's disk usage: implementing a watchdog (script) which checks each container's disk usage and stops those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admins and can easily access the containers<br />
* See how instances can easily give their network information to the coordinator<br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* Finish writing the specifications + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype with a basic front-end that makes it possible to launch a remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: this allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect via SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator and Shinken containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We already have a good cornerstone for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue development of the client/provider web pages on the front-end<br />
* Start editing the help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file; modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (e.g. publishing from the provider to the front-end)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Establish script that automatically create SSH-jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
<br />
=== Week 6: February 29th - Mars 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that will run a command every 30 seconds: that command will be used to send the file that contains container's information to rabbitmq server<br />
** Modify coordinator to set up 2 users: one for the front-end and one for the clients. Each one will contain only the public key they need in authorized_keys' file<br />
** Modify startProvider script to check is ssh-server is installed and running on provider, and change default port (22 to 22000)<br />
** Modify watchdog functioning: up to now, the script was just checking if each instance was respecting a limit. Now its behaviour allows us to have different disk usage for each instance. Now we use a cron job, we won't need anymore to launch the script by ourselves<br />
** Change monitoring system: we found an other monitoring system for Docker called cAdvisor which gives us enough informations about containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name : <username>-<provider_domain_name>-<num_instance_user_at_provider> eg. : toto-domain1-0<br />
** Add form to modify provider machines informations<br />
** Fix warning "CSS file deliver as html file" by Meteor<br />
** Add README to explain how to use scripts, how files are organized (for github branch : frontendWebui , docker , master )<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** SSH default configuration needed to be changed to: disable root login and authentication by password<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, eventually develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: Mars 7th - Mars 13th ===<br />
* Finish creating the flyer<br />
* Write report for our last MPI course<br />
* End of Rabbitmq set up on front-end<br />
* Test complete loop:<br />
** Create profile<br />
** Set required information (ssh public key)<br />
** As a provider, give settings of the provided machine<br />
** As a client, ask for an instance<br />
** As a client connect to the instance (ssh)<br />
** Check that Rabbitmq is correctly tracing back information about containers/instances <br />
* Add a rating system which will be used to give a mark to providers.<br />
* Start preparing the presentation<br />
<br />
=== Week 8: Mars 14th - Mars 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before from adding new capabilities, scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to only use the resources you need when you need it<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker make it easy to identify issues and isolate the problem container, quickly roll back to make the necessary changes then push the updated container into production. The isolation between containers make these changes less disruptive than traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automatize the containers deployment on provider side. This includes launching coordinator instance and monitoring instance (shinken). The coordinator instance will allow us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new container. It's not possible to do so from a container, and that task needs to be done from the host. That's why the first step is to create a new user on provider machine that we will use to launch new containers or stop them. The moment it is done, we deploy necessary scripts in this user's home. Those scripts are necessary to launch and stop new containers. It is simpler for us to do so than transferring those files from the coordinator to the host when the connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building coordinator and monitoring (shinken) images. To do so we use Dockerfile that allow us to build a container containing all we need. The coordinator instance just contains a ssh web-server. That container exposes it's port 22 and will be used as a jump host to connect the front-end/clients to the other instances.<br />
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on Docker deamon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides some functionalities which allow us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves like space disk usage and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPU is already used. There is a Docker option we can use while launching container that allow us to choose which CPU the container will use to run. <br />
The example below shows how this works with 4 CPU (and hyper-threading coefficient is 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
<br />
<br />
'''Memory:''' While launching a container, we set memory soft limit as the value required/reserved by the client. The hard limit is set as the maximum memory made available by the provider In doing so, a container can use more memory that his soft limit. But if several containers are running on the same host, Docker will ensure that each container doesn't consume more memory than his soft limit.<br />
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage. And yet, it's really important for us to make sure that a client will not use to much space disk of the provider. To do so, we implemented a watchdog that check every 30 seconds the disk usage of each container and stop them if they reach the limit defined by the provider. We also use that watchdog to inspect and save container's information that will be used on the front-end to display container's state and space disk usage. Thanks to that, clients will know if they are about to reach the limit.<br />
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit for bandwidth usage. Then, Docker takes care to divide equitably the available bandwidth to each container.<br />
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker-SRS&diff=27866Projets-2015-2016-IaaS Docker-SRS2016-03-14T09:42:08Z<p>Romain.Badamo-Barthelemy: /* Logical database requirement */</p>
<hr />
<div>
<br />
==Definitions, acronyms and abbreviations==<br />
<br />
==References==<br />
<br />
:*The main page of the project: [[Projets-2015-2016-IaaS Docker | Collaborative IaaS]]<br />
<br />
==Overview of the remainder of the document==<br />
<br />
:The rest of the SRS examines the specifications of the [[Projets-2015-2016-IaaS Docker | Collaborative IaaS]] project in detail. Section two presents the general factors that affect the project and its requirements, such as user characteristics and project constraints. Section three outlines the detailed, specific functional requirements, along with performance, system and other related requirements of the project. Supporting information is provided in the appendices.<br />
<br />
=General description=<br />
<br />
==Product perspective==<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
==Product functions==<br />
<br />
:*Client :<br />
:** Register to the service<br />
:** Ask for an instance, specifying the required resources<br />
:** Launch an instance<br />
:** Connect to running instance<br />
:** Stop an instance<br />
:** Remove an instance<br />
:** Give a mark to a provider<br />
<br />
:*Provider :<br />
:** Register to the service<br />
:** Download required files<br />
:** Launch coordinator system<br />
:** Provide some of his resources<br />
:** Start providing<br />
:** Stop providing<br />
:** See resources consumption of the instances running on his machine<br />
<br />
== User characteristics==<br />
:* The client/provider doesn't have to be familiar with programming<br />
:* The client/provider should know Unix basics<br />
:* The client/provider should know how SSH works<br />
:* The provider should know how to launch a script in a terminal<br />
<br />
==General constraints==<br />
:*System constraint:<br />
:** The provider's machine must run a Unix system (Ubuntu for example)<br />
:** The provider must have Docker installed<br />
<br />
:*Environment constraint:<br />
:** Internet access is required to use the service<br />
<br />
==Assumptions and dependencies==<br />
:* The client/provider has internet access<br />
<br />
=Specific requirements, covering functional, non-functional and interface requirements=<br />
<br />
==Functional requirements==<br />
:* The system must allow users to create their profile<br />
:* The system must allow the client to ask for an instance, specifying the required resources<br />
:* The system must allow the client to launch an instance<br />
:* The system must allow the client to connect to a running instance<br />
:* The system must allow the client to stop an instance<br />
:* The system must allow the client to remove an instance<br />
:* The system should allow the client to give a mark to a provider<br />
<br />
<br />
:* The system must allow the provider to download the required files<br />
:* The system must allow the provider to launch the coordinator system<br />
:* The system must allow the provider to provide some of his resources<br />
:* The system must allow the provider to start providing<br />
:* The system must allow the provider to stop providing<br />
:* The system should allow the provider to see the resource consumption of the instances running on his machine<br />
<br />
==Performance requirements==<br />
<br />
==Design constraints==<br />
<br />
==Logical database requirement==<br />
[[File:Database.jpg|center|thumb|1000px|Database diagram]]<br />
<br />
==Software System attributes==<br />
<br />
===Reliability===<br />
<br />
The system must deliver correct information at all times, so that: <br />
* Clients can only connect to their own instances<br />
* Clients can know the status of their instances<br />
* Providers can know the status of their machine<br />
<br />
===Availability===<br />
<br />
The system must be available 24/7, since both providers and clients should be able to use it at any time.<br />
<br />
===Security===<br />
<br />
The security of the service must be optimal: clients should not be able to access information on other clients' instances. Furthermore, providers should not be able to access the containers' information either.<br />
<br />
===Maintainability===<br />
<br />
Updates have to be easy to apply, in order to add new functionalities and improve the service easily.<br />
<br />
===Portability===<br />
:For the moment, the system will be available on Linux only, on the provider side.<br />
:However, if packages are available on other systems, we might release the system on other OSes later.<br />
<br />
==Other requirements==<br />
:*The system must be able to run on Linux 14 or higher<br />
:*The system must not consume too much CPU<br />
:*The system must not consume too much Memory<br />
<br />
=Product evolution=<br />
<br />
=Appendices=<br />
<br />
==Specification ==<br />
* The global project's page can be found [[Projets-2015-2016-IaaS Docker | here]].<br />
<br />
==Licensing Requirements==<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27823Projets-2015-2016-IaaS Docker2016-03-11T11:31:11Z<p>Romain.Badamo-Barthelemy: /* Containers' automatic deployment */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run big data computations for a few users. To do so, the solution should work with Docker to virtualize user machines and control the resource usage of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas that would be good to implement in the future (after the end of our project) to improve the service.<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their private profile. We imagine that clients could consult providers' profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes geographic allocation between providers and clients (better network): it's better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent the provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are administrators of their machine, they can see what the containers contain, or enter them. It would be good to guarantee clients that their instances are totally safe, and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue when using a public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container; limit resource allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to implement an easy way to limit a container's disk usage: implementing a watchdog (script) which will check each container's disk usage and stop those that exceed a limit<br />
* Think about restricting access to Docker containers: for the moment, providers are administrators and can easily access containers<br />
* See how instances can easily give their network information to the coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & set up meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype with a basic front-end that makes it possible to launch a remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect via SSH from the coordinator instance to the host and launch deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We now have a solid foundation for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file; modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (e.g. publishing from the provider to the front-end)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH jump-host config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Set up RabbitMQ on both the front-end and provider sides<br />
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that will run a command every 30 seconds: that command will be used to send the file that contains the containers' information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one contains only the public key it needs in its authorized_keys file<br />
** Modify the startProvider script to check if an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
** Modify the watchdog's behaviour: up to now, the script was just checking that each instance respected a single limit. Its new behaviour allows us to have a different disk usage limit for each instance. Since we now use a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker called cAdvisor which gives us enough information about containers.<br />
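Since cron's minimum interval is one minute, the 30-second job mentioned above is usually obtained with two crontab entries, the second offset by a sleep. A sketch, where the script path is illustrative:<br />

```text
* * * * * /home/provider/sendContainerInfo.sh
* * * * * sleep 30 && /home/provider/sendContainerInfo.sh
```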
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify the provider's machine information<br />
** Fix warning "CSS file delivered as HTML file" by Meteor<br />
** Add a README to explain how to use the scripts and how files are organized (for the GitHub branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/success<br />
** Use proper parameters to start/stop instances<br />
** Add a username field in the profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** The SSH default configuration needed to be changed to disable root login and password authentication<br />
** The connection from a client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Start to write report for our last MPI course<br />
* Finish setting up RabbitMQ on the front-end<br />
* Add a rating system which will be used to give a mark to providers.<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before from adding new capabilities, scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to use only the resources you need, when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, and then push the updated container into production. The isolation between containers makes these changes less disruptive than traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
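The jump-host setup above can be sketched as a client-side <code>~/.ssh/config</code>. Every host name, port and user below is an illustrative assumption, not the project's actual generated configuration:<br />

```text
# Jump host: the coordinator container on the provider machine
Host coordinator
    HostName provider.example.org
    Port 22000
    User client-access

# Allocated instance, reached through the coordinator
Host toto-domain1-0
    User client
    ProxyCommand ssh -W %h:%p coordinator
```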
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance will allow us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It's not possible to do so from a container; that task needs to be done from the host. That's why the first step is to create a new user on the provider machine, which we will use to launch and stop containers. Once this is done, we deploy the necessary scripts in this user's home directory. Those scripts are needed to launch and stop containers, and it is simpler for us to deploy them once than to transfer them from the coordinator to the host every time a connection is established.<br />
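A sketch of that bootstrap step. The user name and script names are illustrative assumptions, not the project's actual ones:<br />

```shell
# Illustrative bootstrap: create the deployment user and install the scripts in its home.
DEPLOY_USER=iaas-deploy
ADD_USER_CMD="sudo useradd -m -s /bin/bash ${DEPLOY_USER}"
COPY_CMD="sudo cp launchContainer.sh stopContainer.sh /home/${DEPLOY_USER}/"
# Print the commands instead of running them, for safety:
echo "$ADD_USER_CMD"
echo "$COPY_CMD"
```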
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (Shinken) images. To do so we use Dockerfiles, which allow us to build containers holding everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22 and will be used as a jump host to connect the front-end/clients to the other instances.<br />
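A minimal coordinator Dockerfile might look like the sketch below. The base image, package names and sed edits are assumptions for illustration, not the project's actual file:<br />

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server && mkdir /var/run/sshd
# Keys only: disable root login and password authentication
RUN sed -i 's/^PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config && \
    sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```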
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on the Docker daemon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides some functionalities that allow us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves, such as disk space usage and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already used. There is a Docker option we can use when launching a container that allows us to choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
<br />
<br />
'''Memory:''' When launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit. But if several containers are running on the same host, Docker will ensure that each container doesn't consume more memory than its soft limit.<br />
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage. And yet, it's really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops those that reach the limit defined by the provider. We also use that watchdog to inspect and save the containers' information, which is used on the front-end to display each container's state and disk space usage. Thanks to that, clients will know if they are about to reach the limit.<br />
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27822Projets-2015-2016-IaaS Docker2016-03-11T11:10:10Z<p>Romain.Badamo-Barthelemy: /* Week 7: Mars 7th - Mars 13th */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to let a group of users (members) pool their laptops or desktops so that a few users can run big-data computations on them. To do so, the solution relies on Docker to virtualize the users' machines and to control how much of each machine's resources is used.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas which would be worth implementing in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their own private profile. We imagine that clients could consult provider profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to select only the best-rated providers (a special, more expensive package of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes the geographic allocation between providers and clients (better network): it's better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent a provider from entering the instances, or from seeing what the instances running on his machine contain (difficult): since providers are administrators of their machines, they can inspect or enter the containers. It would be good to guarantee clients that their instances are totally safe and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Writing the specifications, creating architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container, and limit resource allocation at container launch.<br />
** CPU and memory allocation: done<br />
** Docker doesn't seem to provide an easy way to limit a container's disk usage: we are implementing a watchdog (script) which checks each container's disk usage and stops those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* Finish writing the specifications + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We now have a solid foundation for the project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file, and modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (e.g. publishing from the provider to the front-end)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH-jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Set up RabbitMQ on both the front-end side and the provider side<br />
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that runs a command every 30 seconds: that command sends the file containing the containers' information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one contains only the public key it needs in its authorized_keys file<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (from 22 to 22000)<br />
** Modify the watchdog's behaviour: up to now, the script just checked that each instance respected a single limit; it now supports a different disk usage limit for each instance. And since it runs from a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker called cAdvisor which gives us enough information about the containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix warning "CSS file deliver as html file" by Meteor<br />
** Add README to explain how to use scripts, how files are organized (for github branch : frontendWebui , docker , master )<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when machines allocate resources belonging to a different user<br />
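The instance-naming scheme above can be sketched as a small function (a hypothetical illustration based on the example toto-domain1-0; the function name and separator are our assumptions, not the project's actual code):<br />

```python
def instance_name(username, provider_domain, num_instance):
    """Build the unique instance name <username>-<provider_domain_name>-<num>."""
    return "%s-%s-%d" % (username, provider_domain, num_instance)

# e.g. the first instance of user "toto" on provider "domain1"
print(instance_name("toto", "domain1", 0))  # toto-domain1-0
```

Including the per-provider counter keeps the name unique even when the same user runs several instances on the same provider.<br />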
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** The default SSH configuration needed to be changed to disable root login and password authentication<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve a few things and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Start to write report for our last MPI course<br />
* Finish setting up RabbitMQ on the front-end<br />
* Add a rating system which will be used to give a mark to providers.<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before from adding new capabilities, scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7x more often after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to only use the resources you need when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
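As a hypothetical sketch of the client-side configuration for the jump-host setup shown above (host names, port and user names are our assumptions, not the project's actual values), a client's ~/.ssh/config could look like:<br />

```text
# Jump through the provider's coordinator container to reach the allocated instance
Host coordinator
    HostName provider.example.org   # provider machine (assumed address)
    Port 22000                      # assumed port exposed for the coordinator
    User clients                    # the coordinator user dedicated to clients

Host my-instance
    # OpenSSH jump: tunnel through the coordinator to the instance
    ProxyCommand ssh -W %h:%p coordinator
```

With such a config the client simply runs `ssh my-instance` and OpenSSH handles the hop through the coordinator.<br />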
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance allows us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It's not possible to do so from a container; that task needs to be done from the host. That's why the first step is to create, on the provider machine, a new user that we will use to launch and stop containers. Once that is done, we deploy the necessary scripts in this user's home directory. Those scripts are needed to launch and stop new containers, and deploying them at creation time is simpler than transferring them from the coordinator to the host when the connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (Shinken) images. To do so we use Dockerfiles, which allow us to build containers with everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22 and is used as a jump host to connect the front-end and the clients to the other instances.<br />
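A minimal coordinator image along these lines might be built from a Dockerfile like the following (a sketch under assumed choices of base image and paths, not the project's actual file; the hardening matches the SSH settings described in Week 6):<br />

```dockerfile
FROM ubuntu:14.04

# The coordinator only needs an SSH server acting as a jump host
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd

# Harden the defaults: no root login, no password authentication
RUN sed -i 's/^PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config && \
    echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

Running sshd with `-D` keeps it in the foreground so the container stays alive as long as the SSH server does.<br />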
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, once the images are successfully built, we can run these containers on the Docker daemon. We are then able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides functionalities which allow us to restrict CPU and memory usage. However, we had to implement some functionalities ourselves, such as disk space and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and keep track of which CPUs are already in use. Docker provides an option at container launch that lets us choose which CPUs the container will run on.<br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
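The bookkeeping described above can be sketched as follows (a simplified model, not the project's actual script; class and method names are ours). It hands out logical CPU indices and produces the value for Docker's `--cpuset-cpus` run option:<br />

```python
class CpuAllocator:
    """Track which logical CPUs are free on a host with hyper-threading."""

    def __init__(self, physical_cpus, ht_coeff=2):
        # Logical CPUs are numbered 0 .. physical_cpus * ht_coeff - 1
        self.free = list(range(physical_cpus * ht_coeff))

    def allocate(self, n):
        """Reserve n logical CPUs; returns a --cpuset-cpus string."""
        if n > len(self.free):
            raise RuntimeError("not enough free CPUs on this provider")
        picked, self.free = self.free[:n], self.free[n:]
        return ",".join(str(c) for c in picked)

    def release(self, cpuset):
        """Give the CPUs back when a container stops."""
        self.free.extend(int(c) for c in cpuset.split(","))

alloc = CpuAllocator(physical_cpus=4, ht_coeff=2)  # 8 logical CPUs
print(alloc.allocate(2))  # 0,1  -> docker run --cpuset-cpus="0,1" ...
```

The coordinator only needs to remember which cpuset string each container received so it can release those CPUs when the container stops.<br />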
<br />
<br />
'''Memory:''' When launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit; but if several containers are running on the same host, Docker ensures that each container doesn't consume more memory than its soft limit.<br />
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a built-in way to restrict disk usage. Yet it is really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops any container that exceeds the limit defined by the provider. We also use that watchdog to inspect and save each container's information, which the front-end uses to display the container's state and disk space usage. Thanks to that, clients know when they are about to reach the limit.<br />
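The watchdog's decision logic can be modelled as below (a simplified sketch; the real script measures sizes on disk and stops containers via the Docker CLI, which we only stub here through a callback):<br />

```python
def check_containers(usage_mb, limits_mb, stop):
    """Stop every container whose disk usage has reached its provider-defined limit.

    usage_mb / limits_mb map container name -> megabytes; stop is a callback
    (in the real watchdog it would run `docker stop <name>`).
    Returns the per-container state to publish to the front-end.
    """
    states = {}
    for name, used in usage_mb.items():
        limit = limits_mb[name]
        if used >= limit:
            stop(name)
            status = "stopped"
        else:
            status = "running"
        states[name] = {"status": status, "disk_mb": used, "limit_mb": limit}
    return states

stopped = []
state = check_containers({"toto-domain1-0": 900}, {"toto-domain1-0": 1024}, stopped.append)
```

Because the published state carries both disk_mb and limit_mb, the front-end can compare them and warn a client whose container is approaching its limit.<br />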
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit for bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27821Projets-2015-2016-IaaS Docker2016-03-11T11:09:50Z<p>Romain.Badamo-Barthelemy: /* Week 6: February 29th - Mars 6th */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a user group (member) to pool their laptops or desktop in order to calculate big data of few users. To do so, the solution should work with Docker to virtualize user machines and control the use of resources of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our waffle shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas we have which would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profile: at the moment, users can only access their private profile. We imagine that we can consult providers profile to see which one is best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement algorithm that optimize geographic allocation between providers and clients (better network): it's better for both clients and providers to be on the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find way to prevent provider to enter in the instance and do whatever he wants, or see what each instances running on his machines contains (difficult): since providers are admin of their machine they can see what the containers contain, or enter the containers. It would be good to guarantee clients that their instances are totally safe, and no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31th ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit space disk usage of each container, limit resources allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to implement easy way to limit container's disk usage: implementing a watchdog (script) which will check container's disk usage and stop those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create user on host: will be used to connect ourselves in ssh from coordinator instance to host and launch deployment scripts<br />
** Create script that totally automatizes user creation, images creation and build, coordinator's and shinken's containers launch<br />
* At the end of the week, the prototype is working: we can launch an instance an a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We have a good cornerstone of our project yet.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create script that automatically set client public key in container's authorized_keys file, modify some script to automatically delete client public key in coordinator's authorized_keys file<br />
* Start to study and set up Rabbitmq (publish from provider to front-end for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Establish script that automatically create SSH-jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
<br />
=== Week 6: February 29th - Mars 6th ===<br />
* Container's deployment:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that will run a command every 30 seconds: that command will be used to send the file that contains container's information to rabbitmq server<br />
** Modify coordinator to set up 2 users: one for the front-end and one for the clients. Each one will contain only the public key they need in authorized_keys' file<br />
** Modify startProvider script to check is ssh-server is installed and running on provider, and change default port (22 to 22000)<br />
** Modify watchdog functioning: up to now, the script was just checking if each instance was respecting a limit. Now its behaviour allows us to have different disk usage for each instance. Now we use a cron job, we won't need anymore to launch the script by ourselves<br />
** Change monitoring system: we found an other monitoring system for Docker called cAdvisor which gives us enough informations about containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name : <username>-<provider_domain_name>-<num_instance_user_at_provider> eg. : toto-domain1-0<br />
** Add form to modify provider machines informations<br />
** Fix warning "CSS file deliver as html file" by Meteor<br />
** Add README to explain how to use scripts, how files are organized (for github branch : frontendWebui , docker , master )<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** SSH default configuration needed to be changed to: disable root login and authentication by password<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, eventually develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: Mars 7th - Mars 13th ===<br />
* Finish creating the flyer<br />
* Start to write report for our last MPI course<br />
* End of Rabbitmq set up on front-end<br />
* Add a rating system which will be use to give a mark to providers.<br />
<br />
=== Week 8: Mars 14th - Mars 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to set up developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run them on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications.<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to change your application dynamically like never before, from adding new capabilities and scaling out services to quickly fixing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more often after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin those containers down to use only the resources you need, when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, and then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Global Architecture ==<br />
[[File:General_schema_IaaS.png|center|thumb|1000px|Global architecture]]<br />
<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
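As the figures above show, clients never reach their instances directly: every SSH connection is relayed through the provider's coordinator container, which acts as a jump host. A minimal client-side sketch of such a configuration (the host name "provider.example.org", port 2222 and the instance address 172.18.0.5 are invented placeholders, not the project's actual values):<br />

```shell
#!/bin/sh
# Hypothetical SSH client configuration for reaching an allocated instance
# through the coordinator jump host. All names/addresses are illustrative.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host coordinator
    HostName provider.example.org
    Port 2222
    User client

Host my-instance
    HostName 172.18.0.5
    User client
    # Relay the connection through the coordinator (SSH jump host)
    ProxyCommand ssh -W %h:%p coordinator
EOF
echo "wrote $cfg"
```

With such a file passed via "ssh -F", running "ssh my-instance" first opens a connection to the coordinator and tunnels the instance connection through it.<br />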
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance allows us to launch new containers and establishes the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It is not possible to do so from inside a container; that task has to be done from the host. That is why the first step is to create a new user on the provider machine, which we will use to launch and stop containers. Once the user is created, we deploy the necessary scripts in its home directory; these scripts are what launch and stop the containers. Deploying them once at setup is simpler than transferring the files from the coordinator to the host every time a connection is established.<br />
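A rough sketch of this first step (the user name "iaas" and the script names are assumptions, not necessarily the project's real ones; the run() wrapper only prints the privileged commands so the sketch stays side-effect free):<br />

```shell
#!/bin/sh
# Provider-side setup sketch: create a dedicated unprivileged user and
# deploy the start/stop helper scripts into its home directory.
run() { echo "+ $*"; }   # dry run; replace the body with "$@" to execute

create_deploy_user() {
  # Dedicated account used to start/stop containers on the host
  run useradd -m -s /bin/sh iaas
  # Deploy the helper scripts once, at setup time
  run install -m 0750 -o iaas startContainer.sh stopContainer.sh /home/iaas/
}

create_deploy_user
```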
<br />
'''Second step: images creation'''<br />
<br />
The second step consists in building the coordinator and monitoring (Shinken) images. To do so we use Dockerfiles, which let us build containers with everything we need. The coordinator image just contains an SSH server. That container exposes its port 22 and is used as a jump host to connect the front-end and the clients to the other instances.<br />
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, once the images are successfully built, we can run these containers on the Docker daemon. We are then able to connect the front-end to the coordinator instance and deploy instances.<br />
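Put together, steps two and three boil down to a few Docker commands. The sketch below uses invented image and directory names ("iaas/coordinator", "iaas/shinken"), and run() only echoes the commands instead of executing them:<br />

```shell
#!/bin/sh
# Build the two images and start them on the provider's Docker daemon.
run() { echo "+ $*"; }   # dry run; replace the body with "$@" to execute

deploy_provider() {
  run docker build -t iaas/coordinator ./coordinator   # Dockerfile with an SSH server
  run docker build -t iaas/shinken ./monitoring
  # Publish the coordinator's port 22 so it can serve as a jump host
  run docker run -d --name coordinator -p 2222:22 iaas/coordinator
  run docker run -d --name shinken iaas/shinken
}

deploy_provider
```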
<br />
== Resources management ==<br />
<br />
Docker already provides functionalities that allow us to restrict CPU and memory usage. However, we had to implement some functionalities ourselves, such as disk space and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and keep track of which CPUs are already in use. Docker offers an option, set when launching a container, that lets us choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
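The Docker option in question is --cpuset-cpus, which pins a container to specific logical CPUs. A sketch of the bookkeeping (the helper below is hypothetical, not the project's actual script; it simply hands out the lowest logical CPU ids that are not yet allocated):<br />

```shell
#!/bin/sh
# next_free_cpus TOTAL ALLOCATED_CSV COUNT
# Print COUNT logical CPU ids in [0, TOTAL) not present in ALLOCATED_CSV,
# comma-separated, in the format expected by docker run --cpuset-cpus.
next_free_cpus() {
  total=$1; used=",$2,"; want=$3; out=""; i=0
  while [ "$i" -lt "$total" ] && [ "$want" -gt 0 ]; do
    case "$used" in
      *",$i,"*) ;;                           # CPU already taken
      *) out="$out,$i"; want=$((want - 1)) ;;
    esac
    i=$((i + 1))
  done
  printf '%s\n' "${out#,}"
}

cpuset_flag() { printf '%s\n' "--cpuset-cpus=$(next_free_cpus "$@")"; }

# 8 logical CPUs (4 cores with a hyper-threading coefficient of 2),
# CPUs 0 and 1 already allocated, client asks for 2 CPUs:
cpuset_flag 8 "0,1" 2    # prints --cpuset-cpus=2,3
```

The resulting flag is then appended to the docker run command for the client's instance.<br />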
<br />
<br />
'''Memory:''' While launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit, but if several containers are running on the same host, Docker ensures that each container does not consume more than its soft limit.<br />
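With Docker's standard flags this policy maps to --memory-reservation (the soft limit) and --memory (the hard limit). A small sketch, with example values only:<br />

```shell
#!/bin/sh
# Build the memory-related docker run options for one client instance:
# the soft limit is what the client reserved, the hard limit is the
# maximum the provider makes available.
mem_flags() {
  soft=$1; hard=$2
  printf '%s\n' "--memory-reservation=$soft --memory=$hard"
}

# e.g. client reserved 512 MB on a provider exposing at most 2 GB:
mem_flags 512m 2g    # prints --memory-reservation=512m --memory=2g
# docker run -d $(mem_flags 512m 2g) some-image
```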
<br />
<br />
'''Disk:''' Docker does not seem to provide a functionality to restrict disk usage, yet it is really important for us to make sure that a client does not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops those that reach the limit defined by the provider. We also use that watchdog to inspect and save container information, which is used on the front-end to display each container's state and disk usage. Thanks to that, clients know when they are about to reach the limit.<br />
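A stripped-down sketch of such a watchdog. The Docker queries are kept as comments (the exact way the real scripts obtain per-container sizes is not shown here); the testable part is the decision logic, which reads "id bytes" pairs and reports which containers must be stopped:<br />

```shell
#!/bin/sh
# over_limit USED_BYTES LIMIT_BYTES: succeeds when the limit is exceeded
over_limit() { [ "$1" -gt "$2" ]; }

# Read "container_id used_bytes" pairs on stdin and decide for each one.
# In the real watchdog the pairs would come from the Docker daemon, e.g.
#   docker ps -q | while read id; do used=$(...); done
# and "stop" would actually run: docker stop "$id"
check_containers() {
  limit=$1
  while read -r id used; do
    if over_limit "$used" "$limit"; then
      echo "stop $id"
    else
      echo "ok $id"
    fi
  done
}

# Invoked every 30 seconds, e.g.: while true; do ...; sleep 30; done
printf 'c1 100\nc2 900\n' | check_containers 500   # prints "ok c1" then "stop c2"
```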
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we can use Wondershaper to set a limit on bandwidth usage. Docker then divides the available bandwidth equitably among the containers.<br />
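Wondershaper works at the network-interface level, so applying it to the Docker bridge caps all containers behind it at once. A sketch ("docker0" is Docker's default bridge name; the rates are arbitrary examples, in kbit/s as the classic wondershaper script expects):<br />

```shell
#!/bin/sh
# Build the wondershaper invocation for the provider's Docker bridge.
# Classic usage: wondershaper <interface> <downlink_kbps> <uplink_kbps>
shape_cmd() {
  iface=$1; down=$2; up=$3
  printf '%s\n' "wondershaper $iface $down $up"
}

# e.g. cap the shared bridge at 8 Mbit/s down, 1 Mbit/s up:
shape_cmd docker0 8192 1024    # prints wondershaper docker0 8192 1024
```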
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27820Projets-2015-2016-IaaS Docker2016-03-11T11:04:33Z<p>Romain.Badamo-Barthelemy: /* Week 4: February 15th - February 21st */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops so that a few users can run big data computations. To do so, the solution relies on Docker to virtualize the user machines and control the resource usage of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our waffle shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas we have which would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their private profile. We imagine being able to consult provider profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes the geographic allocation between providers and clients (better network): it is better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent the provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are admins of their machines, they can see what the containers contain, or enter them. It would be good to guarantee clients that their instances are totally safe and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container; limit resource allocation at container launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to offer an easy way to limit a container's disk usage: implementing a watchdog (script) that checks each container's disk usage and stops those exceeding a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* Finish writing the specifications + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the front-end & set up meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype with a basic front-end that makes it possible to launch a remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect via SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance, but we already have a solid cornerstone for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue developing the client/provider web pages on the front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file; modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (e.g. publishing from the provider to the front-end)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH jump configuration for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Set up RabbitMQ on both the front-end side and the provider side<br />
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container deployment:<br />
** Modify the coordinator Dockerfile to install nodejs<br />
** Create a cron job that runs a command every 30 seconds: that command sends the file containing container information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one's authorized_keys file contains only the public key it needs<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
** Modify the watchdog's behaviour: up to now, the script just checked that each instance respected a single limit; its new behaviour allows a different disk usage limit for each instance. It now runs as a cron job, so we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker, cAdvisor, which gives us enough information about the containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix the "CSS file delivered as HTML file" warning from Meteor<br />
** Add a README explaining how to use the scripts and how files are organized (for the GitHub branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/success<br />
** Use proper parameters to start/stop instances<br />
** Add a username field in the profile<br />
** Fix bugs occurring when machines allocate resources from a different user<br />
<br />
* Test and feedback:<br />
** Set up the main test: container deployment and access to the instance from the client<br />
** Some permissions on the coordinator instance needed to be changed<br />
** The default SSH configuration needed to be changed to disable root login and password authentication<br />
** The connection from a client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, and possibly develop some advanced functionalities, during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Start writing the report for our last MPI course<br />
* Finish setting up RabbitMQ on the front-end<br />
* Add a rating system that will be used to give providers a mark.<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27815Projets-2015-2016-IaaS Docker2016-03-11T10:45:10Z<p>Romain.Badamo-Barthelemy: /* Week 3: February 8th - February 14th */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a user group (member) to pool their laptops or desktop in order to calculate big data of few users. To do so, the solution should work with Docker to virtualize user machines and control the use of resources of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our waffle shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas we have which would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their own private profile. Clients could then consult provider profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to select only the best-rated providers (a special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes the geographic allocation between providers and clients (better network): it is better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent a provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are administrators of their machines, they can see what the containers contain, or enter them. It would be good to guarantee clients that their instances are totally safe and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit space disk usage of each container, limit resources allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to implement easy way to limit container's disk usage: implementing a watchdog (script) which will check container's disk usage and stop those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which makes it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: this allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect over SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We now have a solid cornerstone for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file; modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (to publish from the provider to the front-end, for example)<br />
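The key handling in these scripts boils down to two small helpers; this is only a sketch (the function names and the target paths are ours, not the project's actual ones):<br />

```shell
#!/bin/sh
# add_key FILE KEY: append the client's public key to FILE unless present.
add_key() {
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# remove_key FILE KEY: delete the exact key line from FILE.
remove_key() {
  grep -vxF "$2" "$1" > "$1.tmp" 2>/dev/null
  mv "$1.tmp" "$1"
}

# In the project these would target, for example:
#   add_key /root/.ssh/authorized_keys "$CLIENT_PUBKEY"            # in the container
#   remove_key /home/client/.ssh/authorized_keys "$CLIENT_PUBKEY"  # on the coordinator
```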
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH-jump configuration for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Set up RabbitMQ on both the front-end side and the provider side<br />
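The generated SSH-jump configuration could look like the sketch below (host names, user names, and the port are placeholders; ProxyJump needs OpenSSH 7.3 or later, older clients would use ProxyCommand instead):<br />

```shell
#!/bin/sh
# gen_ssh_jump ALIAS INSTANCE_IP JUMP_HOST JUMP_PORT: print an ssh_config
# block that reaches an instance through the coordinator jump host.
# All names used here are placeholders, not the project's actual values.
gen_ssh_jump() {
  cat <<EOF
Host $1
    HostName $2
    User root
    ProxyJump client@$3:$4
EOF
}

# Appended to the client's ~/.ssh/config; afterwards "ssh toto-domain1-0"
# goes through the coordinator transparently, e.g.:
#   gen_ssh_jump toto-domain1-0 172.18.0.5 provider.example.org 2222
```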
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container deployment:<br />
** Modify the coordinator Dockerfile to install Node.js<br />
** Create a cron job that runs a command every 30 seconds: that command is used to send the file containing the containers' information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one contains only the public key it needs in its authorized_keys file<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
** Modify the watchdog's behaviour: up to now, the script just checked whether each instance respected a single limit. Its new behaviour allows a different disk-usage limit for each instance. Since it now runs as a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker, called cAdvisor, which gives us enough information about the containers.<br />
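Since cron's granularity is one minute, a "every 30 seconds" job is usually obtained with two entries, the second delayed by a sleep. A sketch of such a crontab fragment (the user name and script path are illustrative):<br />

```shell
# /etc/cron.d/send-container-info (sketch; user and path are placeholders)
* * * * * coordinator /opt/iaas/send_info.sh
* * * * * coordinator sleep 30 && /opt/iaas/send_info.sh
```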
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix Meteor's "CSS file delivered as HTML file" warning<br />
** Add a README to explain how to use the scripts and how files are organized (for the GitHub branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
* Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** SSH default configuration needed to be changed to: disable root login and authentication by password<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Start to write report for our last MPI course<br />
* Finish the RabbitMQ setup on the front-end<br />
* Add a rating system which will be used to give a mark to providers.<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before from adding new capabilities, scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to only use the resources you need, when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
<br />
== Provider and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate the container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance allows us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It is not possible to do so from a container; that task needs to be done from the host. That is why the first step is to create, on the provider machine, a new user that we will use to launch new containers or stop them. Once this is done, we deploy the necessary scripts in this user's home directory. Those scripts are needed to launch and stop containers, and it is simpler to deploy them at this stage than to transfer them from the coordinator to the host when the connection is established.<br />
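A sketch of this first step as a shell fragment (must run as root on the provider; the user name and the script names are illustrative, not the project's actual ones):<br />

```shell
#!/bin/sh
# First step (sketch): create the dedicated user on the provider host and
# install the launch/stop scripts in its home directory.
useradd -m -s /bin/sh iaas

# Deploy the deployment scripts into the new user's home.
install -o iaas -g iaas -m 750 startInstance.sh stopInstance.sh /home/iaas/

# Let that user talk to the Docker daemon (this implies host-level trust).
usermod -aG docker iaas
```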
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (Shinken) images. To do so, we use Dockerfiles, which allow us to build containers with everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22 and is used as a jump host to connect the front-end and the clients to the other instances.<br />
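The coordinator image can be sketched as a minimal Dockerfile that only runs an SSH server (the Debian base image and package names are assumptions, not the project's actual Dockerfile):<br />

```shell
# Sketch: build the coordinator image from a minimal Dockerfile.
cat > Dockerfile <<'EOF'
FROM debian:jessie
RUN apt-get update && apt-get install -y openssh-server \
 && mkdir -p /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EOF
docker build -t coordinator .
```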
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on the Docker daemon. We are then able to connect the front-end to the coordinator instance and deploy instances.<br />
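This last step could look like the fragment below (a sketch requiring a running Docker daemon; the network name, image names, and host port are illustrative):<br />

```shell
# Put every container on the same user-defined network so the coordinator
# can reach the instances, then start the two service containers.
docker network create iaas-net
docker run -d --name coordinator --net iaas-net -p 2222:22 coordinator
docker run -d --name shinken --net iaas-net shinken
```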
<br />
== Resources management ==<br />
<br />
Docker already provides functionalities that allow us to restrict CPU and memory usage. However, we needed to implement some functionalities ourselves, such as disk-usage and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already in use. There is a Docker option we can use when launching a container that allows us to choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
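This allocation can be sketched with Docker's --cpuset-cpus option: the provider tracks which logical cores are taken and pins each new container to the next free ones. The helper below is ours, not the project's (with 4 physical CPUs and a hyper-threading coefficient of 2 there are 8 logical cores):<br />

```shell
#!/bin/sh
# next_cpuset TOTAL USED N: print the next N free logical-core indices as a
# comma-separated list for "docker run --cpuset-cpus", or fail when fewer
# than N cores remain free.
next_cpuset() {
  total=$1; used=$2; n=$3
  [ $((used + n)) -le "$total" ] || return 1
  seq -s, "$used" $((used + n - 1))
}

# Example: a client reserved 2 CPUs; 2 of the 8 logical cores are already
# taken, so the new container would be pinned to cores "2,3":
#   docker run -d --cpuset-cpus "$(next_cpuset 8 2 2)" client-image
```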
<br />
<br />
'''Memory:''' When launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit; but if several containers are running on the same host, Docker will ensure that each container does not consume more memory than its soft limit.<br />
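In Docker terms this policy maps to the --memory-reservation flag for the soft limit and --memory for the hard limit (a command fragment; the values are illustrative):<br />

```shell
# Soft limit = what the client reserved; hard limit = provider's maximum.
docker run -d \
  --memory-reservation 512m \
  --memory 2g \
  client-image
```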
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage. And yet, it is really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops those that reach the limit defined by the provider. We also use that watchdog to inspect and save the containers' information, which is used on the front-end to display each container's state and disk usage. Thanks to that, clients will know if they are about to reach the limit.<br />
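A sketch of such a watchdog (the limits file, its format, and the du-based measurement are our assumptions, not the project's exact implementation):<br />

```shell
#!/bin/sh
# Watchdog sketch (meant to run from cron): stop any container whose disk
# usage exceeds its per-instance limit.
LIMITS=/etc/iaas/limits.conf    # assumed format: <container_name> <limit_MB>

# over_limit USED_MB LIMIT_MB: succeed when usage exceeds the limit.
over_limit() { [ "${1:-0}" -gt "$2" ]; }

if [ -r "$LIMITS" ]; then
  while read -r name limit_mb; do
    # Measure the container's filesystem usage from inside it, in MB.
    used_mb=$(docker exec "$name" du -sxm / 2>/dev/null | cut -f1)
    if over_limit "$used_mb" "$limit_mb"; then
      docker stop "$name"    # instance exceeded the provider's limit
    fi
  done < "$LIMITS"
fi
```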
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
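With Wondershaper this is a one-liner on the bridge interface that carries the containers' traffic (a command fragment; the interface name and the kbit/s rates are illustrative):<br />

```shell
# Cap download and upload on the Docker bridge; all containers on that
# network then share this capped bandwidth.
wondershaper docker0 8192 8192
```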
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a user group (member) to pool their laptops or desktop in order to calculate big data of few users. To do so, the solution should work with Docker to virtualize user machines and control the use of resources of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our waffle shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather all the ideas we have which would be good to implement in the future to improve the service (after the end of our project).<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profile: at the moment, users can only access their private profile. We imagine that we can consult providers profile to see which one is best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement algorithm that optimize geographic allocation between providers and clients (better network): it's better for both clients and providers to be on the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find way to prevent provider to enter in the instance and do whatever he wants, or see what each instances running on his machines contains (difficult): since providers are admin of their machine they can see what the containers contain, or enter the containers. It would be good to guarantee clients that their instances are totally safe, and no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31th ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit space disk usage of each container, limit resources allocation at containers' launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to implement easy way to limit container's disk usage: implementing a watchdog (script) which will check container's disk usage and stop those that exceed a limit<br />
* Think about restricted access to Docker containers: for the moment, providers are admin and can easily access containers<br />
* See how instances can easily give their network information to coordinator <br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype that contains a basic front-end which make it possible to launch remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create user on host: will be used to connect ourselves in ssh from coordinator instance to host and launch deployment scripts<br />
** Create script that totally automatize user creation, images creation and build, coordinator's and shinken's containers launch<br />
* At the end of the week, the prototype is working: we can launch an instance an a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We have a good cornerstone of our project yet.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implements bandwidth restriction<br />
** Create script that automatically set client public key in container's authorized_keys file, modify some script to automatically delete client public key in coordinator's authorized_keys file<br />
* Start to study and set up Rabbitmq (publish from provider to front-end for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Establish script that automatically create SSH-jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Establish rabbitmq on both front-end side and provider side<br />
<br />
=== Week 6: February 29th - Mars 6th ===<br />
* Container's deployement:<br />
** Modify coordinator Dockerfile to install nodejs<br />
** Create a cron job that will run a command every 30 seconds: that command will be used to send the file that contains container's information to rabbitmq server<br />
** Modify coordinator to set up 2 users: one for the front-end and one for the clients. Each one will contains only the public key they need in authorized_keys' file<br />
** Modify startProvider script to check is ssh-server is installed and running on provider, and change default port (22 to 22000)<br />
** Modify watchdog functioning: up to now, the script was just checking if each instance was respecting a limit. Now it's behaviour allow us to have different disk usage for each instance. Now use a cron job, we won't need anymore to launch the script by ourself<br />
** Change monitoring system: we found an other monitoring system for Docker called cAdvisor which gives us enough information about containers.<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name : <username>-<provider_domain_name>-<num_instance_user_at_provider> eg. : toto-domain1-0<br />
** Add form to modify provider machines informations<br />
** Fix warning "CSS file deliver as html file" by Meteor<br />
** Add README to explain how to use scripts, how files are organized (for github branch : frontendWebui , docker , master )<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add username field in profile<br />
** Resolve bugs occurring when the machines allocate resources from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** SSH default configuration needed to be changed to: disable root login and authentication by password<br />
** Connection from client to his instance is working<br />
=> The main development phase is finished since we have a working base. We still need to improve some things, eventually development some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: Mars 7th - Mars 13th ===<br />
* Finish creating the flyer<br />
* Start to write report for our last MPI course<br />
* End of Rabbitmq set up on front-end<br />
* Add a rating system which will be use to give a mark to providers.<br />
<br />
=== Week 8: Mars 14th - Mars 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before, from adding new capabilities and scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more often after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds, making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin those containers down to use only the resources you need, when you need them.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues, isolate the problem container, quickly roll back to make the necessary changes, then push the updated container into production. The isolation between containers makes these changes less disruptive than in traditional software models.<br />
<br />
= Product perspective =<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
= System Architecture =<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
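<br />
On the client side, the jump-host setup shown in the diagrams corresponds to an OpenSSH configuration of roughly the following shape. This is an illustrative sketch only: the instance name follows the naming scheme used elsewhere in the project, but the IP address, user names, and host name are assumptions.<br />

```
# Hypothetical ~/.ssh/config entry generated for a client: reach the
# instance through the provider's coordinator container (the jump host).
# The provider's SSH daemon is assumed to listen on port 22000.
Host toto-domain1-0
    HostName 172.17.0.3        # instance address on the Docker network (example)
    User root
    ProxyCommand ssh -W %h:%p -p 22000 client@provider.example.org
```

With such an entry in place, `ssh toto-domain1-0` transparently tunnels through the coordinator to the instance.<br />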
<br />
== Coordinator and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers' automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance allows us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It's not possible to do so from within a container: that task needs to be done from the host. That's why the first step is to create a new user on the provider machine, which we will use to launch or stop containers. As soon as it is done, we deploy the necessary scripts in this user's home; those scripts launch and stop the containers. It is simpler for us to do so than to transfer those files from the coordinator to the host every time a connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (Shinken) images. To do so we use Dockerfiles, which allow us to build containers with everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22 and is used as a jump host to connect the front-end and the clients to the other instances.<br />
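<br />
As an illustration, a coordinator image of this kind (an SSH daemon and nothing else) can be described by a very small Dockerfile. The base image and package names below are assumptions for an Ubuntu base of that era, not the project's actual file:<br />

```dockerfile
# Minimal SSH jump-host image: only an SSH daemon is installed.
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y openssh-server && \
    mkdir -p /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```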
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on the Docker daemon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
<br />
== Resources management ==<br />
<br />
Docker already provides functionality to restrict CPU and memory usage. However, we needed to implement some restrictions ourselves, such as disk space usage and bandwidth.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and remember which CPUs are already in use. There is a Docker option we can use when launching a container that allows us to choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
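<br />
The Docker option in question is `--cpuset-cpus` (pin a container to specific logical CPUs). The sketch below shows one way the provider-side bookkeeping could work; it is an assumed implementation for illustration, not the project's actual script.<br />

```shell
# Assumed sketch of CPU allocation on the provider side. With 4 physical
# cores and a hyper-threading coefficient of 2 there are 8 logical CPUs
# to hand out; we track the ones already given to a container.
TOTAL_CPUS=8
USED_CPUS=""   # space-separated list of allocated logical CPU ids

next_free_cpu() {
    i=0
    while [ "$i" -lt "$TOTAL_CPUS" ]; do
        case " $USED_CPUS " in
            *" $i "*) ;;              # already allocated, try the next one
            *) echo "$i"; return 0 ;;
        esac
        i=$((i + 1))
    done
    return 1                           # provider has no free CPU left
}

cpu=$(next_free_cpu)                   # first call on a fresh provider
USED_CPUS="$USED_CPUS $cpu"
# The container is then pinned to that CPU at launch, e.g.:
# docker run -d --cpuset-cpus="$cpu" <instance-image>
```

Handling `docker stop` would symmetrically return the id to the free pool.<br />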
<br />
<br />
'''Memory:''' When launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit. But if several containers are running on the same host, Docker ensures that each container doesn't consume more memory than its soft limit.<br />
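<br />
In Docker terms the soft limit is `--memory-reservation` and the hard limit is `--memory`. The launch script could assemble the command as below; the values and image name are illustrative, not the project's actual ones.<br />

```shell
# Values are examples: what the launch script would fill in from the
# client's request and the provider's declared resources.
SOFT_LIMIT=512m   # reserved by the client (soft limit)
HARD_LIMIT=4g     # total memory made available by the provider (hard limit)

docker_cmd="docker run -d --memory-reservation=$SOFT_LIMIT --memory=$HARD_LIMIT my-instance-image"
echo "$docker_cmd"   # executed by the provider-side launch script
```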
<br />
<br />
'''Disk:''' Docker doesn't seem to provide functionality to restrict disk usage. And yet, it's really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops those that reach the limit defined by the provider. We also use that watchdog to inspect and save the containers' information, which is used on the front-end to display each container's state and disk space usage. Thanks to that, clients know when they are about to reach the limit.<br />
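<br />
A minimal sketch of such a watchdog is shown below. The limit value and the storage path are assumptions (the path is Docker's default storage location), and the real script additionally records each container's state for the front-end.<br />

```shell
# Stop any container whose on-disk footprint exceeds the provider's limit.
# Intended to run periodically from cron; all concrete values are examples.
LIMIT_KB=$((1024 * 1024))    # 1 GB per container, defined by the provider

over_limit() {               # over_limit <used_kb>: true if limit reached
    [ "$1" -ge "$LIMIT_KB" ]
}

if command -v docker >/dev/null 2>&1; then
    for id in $(docker ps -q); do
        used_kb=$(du -sk "/var/lib/docker/containers/$id" 2>/dev/null | cut -f1)
        if over_limit "${used_kb:-0}"; then
            docker stop "$id"    # the front-end will report it as stopped
        fi
    done
fi
```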
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we are able to use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
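<br />
With the classic Wondershaper syntax this amounts to shaping the interface the containers share. The interface name and rates below are assumptions (`docker0` is Docker's default bridge); the command needs root.<br />

```shell
# wondershaper <interface> <downlink_kbps> <uplink_kbps>
IFACE=docker0
DOWN_KBPS=8192   # 8 Mbit/s down, shared by all containers (example value)
UP_KBPS=2048     # 2 Mbit/s up (example value)

shape_cmd="wondershaper $IFACE $DOWN_KBPS $UP_KBPS"
echo "$shape_cmd"   # run by the provider's start script, as root
```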
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]<br />
* [https://github.com/meteorhacks/npm meteorhacks:npm Installation Instructions]<br />
<br />
== RabbitMQ related ==<br />
* [https://www.npmjs.com/package/amqplib amqplib, the library used to publish]<br />
* [https://www.rabbitmq.com/getstarted.html RabbitMQ Tutorials]</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker&diff=27811Projets-2015-2016-IaaS Docker2016-03-11T10:38:41Z<p>Romain.Badamo-Barthelemy: /* Roadmap */</p>
<hr />
<div>[[Image:collaborativIaas.jpg|right|400px]]<br />
<br />
= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a user group (member) to pool their laptops or desktop in order to calculate big data of few users. To do so, the solution should work with Docker to virtualize user machines and control the use of resources of each machine.<br />
<br />
Project under the GPLv3 licence: https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez<br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
[[Media:CahierdeschargesIaas.pdf|Specifications (written in French)]]<br />
<br />
[[Media:RapportMPI_Iaas.pdf|Management of innovative projects (MPI) report (written in French)]]<br />
<br />
= Roadmap =<br />
Our Waffle board shows our current roadmap and the different tasks we are working on.<br />
The aim of this section is to gather the ideas that would be worth implementing in the future (after the end of our project) to improve the service.<br />
<br />
'''User experience:'''<br />
* Add a way to report bad behaviour of providers or clients<br />
* Implement public profiles: at the moment, users can only access their private profile. We imagine that clients could consult providers' profiles to see which ones are best rated<br />
* Add the possibility for clients to use the rating system to choose only the best rated providers (special package, more expensive of course)<br />
<br />
'''Monetary system:'''<br />
* Implement monetary system for providers and clients<br />
* Set different possible packages at different prices and for different levels of service<br />
<br />
'''Algorithms:'''<br />
* Implement an algorithm that optimizes geographic allocation between providers and clients (better network): it's better for both clients and providers to be in the same geographical area<br />
* Implement active replication in case a provider suddenly stops his machine<br />
* Reallocate the instances to another provider when the first one decides to cleanly stop his machine (docker commit/docker pull)<br />
* Optimize disk usage and bandwidth allocation<br />
<br />
'''Security:'''<br />
* Find a way to prevent a provider from entering an instance and doing whatever he wants, or from seeing what the instances running on his machine contain (difficult): since providers are admins of their machines, they can see what the containers contain, or enter the containers. It would be good to guarantee clients that their instances are totally safe and that no one, including the provider, can access their information.<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
[[File:gantt0309_iaas.png|center|thumb|1000px|Gantt chart at March 9th]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue using public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project, what we could do<br />
* Redaction of specifications, creation of architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container, limit resource allocation at container launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to offer an easy way to limit a container's disk usage: implementing a watchdog (script) which will check containers' disk usage and stop those that exceed a limit<br />
* Think about restricting access to Docker containers: for the moment, providers are admins and can easily access the containers<br />
* See how instances can easily give their network information to the coordinator<br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of the technologies used<br />
* End of specification redaction + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & setup meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype with a basic front-end which makes it possible to launch a remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: that allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to connect via SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We now have a solid cornerstone for the project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish connection between a client and his container<br />
* Continue client/provider's web page development on front-end<br />
* Start editing help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file, modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (publish from provider to front-end, for example)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH jump config for the client<br />
* Work on foreign keys and database (front-end side)<br />
* Continue front-end development<br />
* Set up RabbitMQ on both the front-end side and the provider side<br />
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container deployment:<br />
** Modify the coordinator Dockerfile to install Node.js<br />
** Create a cron job that runs a command every 30 seconds: that command is used to send the file containing the containers' information to the RabbitMQ server<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one contains only the public key it needs in its authorized_keys file<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
** Modify the watchdog's functioning: up to now, the script just checked that each instance respected a single limit. Its new behaviour allows a different disk usage limit for each instance. Since it now runs as a cron job, we no longer need to launch the script ourselves<br />
** Change the monitoring system: we found another monitoring system for Docker, called cAdvisor, which gives us enough information about the containers.<br />
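<br />
As an aside on the 30-second cron job: cron's granularity is one minute, so a 30-second period is typically obtained with two crontab entries, one of them delayed by a `sleep`. This is a sketch and the script path is hypothetical:<br />

```
# crontab on the provider machine (illustrative)
* * * * * /home/iaas/send_container_info.sh
* * * * * sleep 30 && /home/iaas/send_container_info.sh
```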
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix the "CSS file delivered as HTML file" warning from Meteor<br />
** Add a README to explain how to use the scripts and how files are organized (for the github branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/successes<br />
** Proper parameters to start/stop instances<br />
** Add a username field in the profile<br />
** Resolve bugs occurring when the allocated machines come from a different user<br />
<br />
*Test and feedback:<br />
** Set up the main test: container deployment and access to instance from the client<br />
** Some permissions on coordinator instance needed to be changed<br />
** The default SSH configuration needed to be changed to disable root login and password authentication<br />
** The connection from a client to his instance is working<br />
=> The main development phase is finished, since we have a working base. We still need to improve some things and possibly develop some advanced functionalities during the last two weeks.<br />
<br />
=== Week 7: March 7th - March 13th ===<br />
* Finish creating the flyer<br />
* Start to write report for our last MPI course<br />
* Finish setting up RabbitMQ on the front-end<br />
* Add a rating system which will be used to give a mark to providers.<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=Projets-2015-2016-IaaS_Docker-SRS&diff=27797Projets-2015-2016-IaaS Docker-SRS2016-03-10T12:20:49Z<p>Romain.Badamo-Barthelemy: /* Logical database requirement */</p>
<hr />
<div>The document provides a template of the Software Requirements Specification (SRS). It is inspired of the IEEE/ANSI 830-1998 Standard.<br />
<br />
<br />
'''Read first:'''<br />
* http://www.cs.st-andrews.ac.uk/~ifs/Books/SE9/Presentations/PPTX/Ch4.pptx<br />
* http://en.wikipedia.org/wiki/Software_requirements_specification<br />
* [http://www.cse.msu.edu/~chengb/RE-491/Papers/IEEE-SRS-practice.pdf IEEE Recommended Practice for Software Requirements Specifications IEEE Std 830-1998]<br />
<br />
{|class="wikitable alternance"<br />
|+ Document History<br />
|-<br />
|<br />
!scope="col"| Version<br />
!scope="col"| Date<br />
!scope="col"| Authors<br />
!scope="col"| Description<br />
!scope="col"| Validator<br />
!scope="col"| Validation Date<br />
|-<br />
!scope="row" |<br />
| 0.1.0<br />
| TBC<br />
| TBC<br />
| TBC<br />
| TBC<br />
| TBC<br />
<br />
|}<br />
=Introduction=<br />
<br />
==Purpose of the requirements document==<br />
<br />
:*This Software Requirements Specification (SRS) identifies the requirements for the [[Projets-2015-2016-IaaS Docker | Collaborative Iaas]] project .<br />
:*This document is a guideline about the functionalities offered and the problems that the system solves.<br />
<br />
==Scope of the product==<br />
The objective of this project is to allow a user group (member) to pool their laptops or desktop in order to calculate big data of few users. To do so, the solution should work with Docker to virtualize user machines and control the use of resources of each machine.<br />
<br />
==Definitions, acronyms and abbreviations==<br />
<br />
==References==<br />
<br />
:*The main page of the project: [[Projets-2015-2016-IaaS Docker | Collaborative Iaas]]<br />
<br />
==Overview of the remainder of the document==<br />
<br />
:The rest of the SRS examines the specifications of the [[Projets-2015-2016-IaaS Docker | Collaborative Iaas]] project in detail. Section two presents the general factors that affect the project and its requirements, such as user characteristics and project constraints. Section three outlines the detailed, specific functional requirements, performance, system and other related requirements of the project, along with supporting information.<br />
<br />
=General description=<br />
<br />
==Product perspective==<br />
[[File:IaasContextDiagram.png|center|thumb|1000px|Context diagram]]<br />
<br />
[[File:IaasUseCase.png|center|thumb|1000px|Use case diagram]]<br />
<br />
==Product functions==<br />
<br />
:*Client :<br />
:** Register to the service<br />
:** Ask for an instance, specifying the required resources<br />
:** Launch an instance<br />
:** Connect to running instance<br />
:** Stop an instance<br />
:** Remove an instance<br />
:** Give a mark to a provider<br />
<br />
:*Provider :<br />
:** Register to the service<br />
:** Download required files<br />
:** Launch coordinator system<br />
:** Provide some of his resources<br />
:** Start providing<br />
:** Stop providing<br />
:** See the resource consumption of the instances running on his machine<br />
<br />
== User characteristics==<br />
:* The client/provider doesn't have to be familiar with programming<br />
:* The client/provider should know unix basics<br />
:* The client/provider should know how ssh works<br />
:* The provider should know how to launch a script in a terminal<br />
<br />
==General constraints==<br />
:*System constraint:<br />
:** The provider's machine must run on a unix system (Ubuntu for example)<br />
:** The provider must have Docker installed<br />
<br />
:*Environment constraint:<br />
:** Internet access is required to use the service<br />
<br />
==Assumptions and dependencies==<br />
:* The client/provider has Internet access<br />
<br />
=Specific requirements, covering functional, non-functional and interface requirements=<br />
<br />
==Functional requirements==<br />
:* The system must allow users to create a profile<br />
:* The system must allow a client to request an instance, specifying the required resources<br />
:* The system must allow a client to launch an instance<br />
:* The system must allow a client to connect to a running instance<br />
:* The system must allow a client to stop an instance<br />
:* The system must allow a client to remove an instance<br />
:* The system should allow a client to rate a provider<br />
<br />
<br />
:* The system must allow a provider to download the required files<br />
:* The system must allow a provider to launch the coordinator system<br />
:* The system must allow a provider to provide some of their resources<br />
:* The system must allow a provider to start providing<br />
:* The system must allow a provider to stop providing<br />
:* The system should allow a provider to see the resource consumption of the instances running on their machine<br />
<br />
==Performance requirements==<br />
<br />
==Design constraints==<br />
<br />
==Logical database requirement==<br />
[[File:Database_diagram.png|center|thumb|1000px|Database diagram]]<br />
<br />
==Software System attributes==<br />
<br />
===Reliability===<br />
<br />
The system must deliver correct information at all times, so that: <br />
* Clients can only connect to their own instances<br />
* Clients can check the status of their instances<br />
* Providers can check the status of their machine<br />
<br />
===Availability===<br />
<br />
The system must be available 24/7, since both providers and clients should be able to use it at any time.<br />
<br />
===Security===<br />
<br />
Clients must not be able to access information on the instances of other clients. Furthermore, providers must not be able to access the information inside the containers running on their machine.<br />
<br />
===Maintainability===<br />
<br />
Updates must be easy to apply, so that new functionality can be added and the service improved over time.<br />
<br />
===Portability===<br />
:For the moment, the provider side of the system will be available on Linux only.<br />
:However, if the required packages become available on other systems, we may release the system for other operating systems later.<br />
<br />
==Other requirements==<br />
:*The system must be able to run on recent Linux distributions (e.g. Ubuntu 14.04 or later)<br />
:*The system's own overhead must not consume excessive CPU<br />
:*The system's own overhead must not consume excessive memory<br />
<br />
=Product evolution=<br />
<br />
=Appendices=<br />
<br />
==Specification ==<br />
* The global project's page can be found [[Projets-2015-2016-IaaS Docker | here]].<br />
<br />
==Licensing Requirements==<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html</div>
<hr />
<div>= Project presentation =<br />
== Introduction ==<br />
<br />
The objective of this project is to allow a group of users (members) to pool their laptops or desktops in order to run big-data computations for a few users. To do so, the solution relies on Docker to virtualize user machines and control the resource usage of each machine.<br />
<br />
Project under GPLv3 licence : https://www.gnu.org/licenses/gpl-3.0.fr.html<br />
<br />
== The team ==<br />
'''RICM5 students''' <br />
<br />
* EUDES Robin<br />
* DAMOTTE Alan<br />
* BARTHELEMY Romain<br />
* MAMMAR Malek<br />
* GUO Kai<br />
<br />
'''DUT Students'''<br />
<br />
* BONNARD Loïc<br />
* CAPERAN Théo<br />
<br />
'''Supervisors'''<br />
Pierre-Yves Gibello (Linagora), Vincent Zurczak (Linagora), Didier Donsez <br />
<br />
== Deliverables ==<br />
[https://github.com/EudesRobin/iaas-collaboratif Github repository]<br />
<br />
[https://waffle.io/EudesRobin/iaas-collaboratif Waffle.io]<br />
<br />
= Planning =<br />
<br />
[[File:gantt_iaas.png|center|thumb|1000px|Preliminary Gantt chart]]<br />
<br />
=== Week 1: January 25th - January 31st ===<br />
* Getting familiar with Docker (for some of the group members)<br />
* Fix Docker's DNS issue when using a public network (wifi-campus/eduroam)<br />
* Contacting our supervisors<br />
* First thoughts on this project and what we could do<br />
* Drafting the specifications, creating architecture diagrams<br />
* Create scripts that start/stop containers automatically (some modifications still need to be done)<br />
<br />
=== Week 2: February 1st - February 7th ===<br />
* Manage and limit the disk space usage of each container, limit resource allocation at container launch.<br />
** CPU and memory allocation: ok<br />
** Docker doesn't seem to provide an easy way to limit a container's disk usage: implementing a watchdog (script) which checks each container's disk usage and stops those that exceed a limit<br />
* Think about restricting access to Docker containers: for the moment, providers are admins and can easily access the containers<br />
* See how instances can easily give their network information to the coordinator<br />
* Get familiar with Shinken and study the possibilities<br />
* Specification of the technologies used<br />
* End of specification drafting + feedback from tutors<br />
* Start to work on Meteor-AngularJS tutorials<br />
* Configure a personal VM for the frontend & set up meteor-angular on it<br />
<br />
=== Week 3: February 8th - February 14th ===<br />
* '''Objective for this week:''' get a prototype with a basic front-end that makes it possible to launch a remote Docker instance.<br />
* Container deployment: <br />
** Deploy all containers on the same network: this allows us to connect to the instances from the coordinator<br />
** Create a user on the host: it will be used to SSH from the coordinator instance to the host and launch the deployment scripts<br />
** Create a script that fully automates user creation, image creation and build, and the launch of the coordinator's and Shinken's containers<br />
* At the end of the week, the prototype is working: we can launch an instance on a provider machine from the front-end. We still need to establish and test the connection between a client and his instance. We now have a solid cornerstone for our project.<br />
<br />
=== Week 4: February 15th - February 21st ===<br />
* Try to establish a connection between a client and his container<br />
* Continue development of the client/provider web pages on the front-end<br />
* Start editing the help page<br />
* Correct some responsive effects on the site<br />
* Container deployment: <br />
** Implement bandwidth restriction<br />
** Create a script that automatically sets the client's public key in the container's authorized_keys file, and modify some scripts to automatically delete the client's public key from the coordinator's authorized_keys file<br />
* Start to study and set up RabbitMQ (e.g. publishing from provider to front-end)<br />
<br />
=== Week 5: February 22nd - February 28th (Vacation) ===<br />
* Update the wiki/help page, work on some responsive issues on the website<br />
* Write a script that automatically creates the SSH jump configuration for the client<br />
* Work on foreign keys and the database (front-end side)<br />
* Continue front-end development<br />
* Set up RabbitMQ on both the front-end side and the provider side<br />
<br />
=== Week 6: February 29th - March 6th ===<br />
* Container deployment:<br />
** Modify the coordinator Dockerfile to install nodejs<br />
** Modify existing scripts and add new ones to use nodejs and rabbitmq to send information about containers<br />
** Modify the coordinator to set up 2 users: one for the front-end and one for the clients. Each one contains only the public key it needs in its authorized_keys file<br />
** Modify the startProvider script to check whether an SSH server is installed and running on the provider, and change the default port (22 to 22000)<br />
<br />
* Frontend dev:<br />
** Generate a proper & unique instance name: <username>-<provider_domain_name>-<num_instance_user_at_provider>, e.g. toto-domain1-0<br />
** Add a form to modify provider machine information<br />
** Fix the warning "CSS file deliver as html file" from Meteor<br />
** Add a README to explain how to use the scripts and how files are organized (for github branches: frontendWebui, docker, master)<br />
** Improve user feedback (notifications) on errors/success<br />
** Proper parameters to start/stop instances<br />
** Add a username field in the profile<br />
** Fix bugs occurring when machines allocate resources from a different user<br />
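The instance naming scheme above can be sketched as a tiny shell helper; the function name <code>make_instance_name</code> is ours for illustration, not taken from the project code:<br />

```shell
# Hypothetical helper illustrating the <username>-<provider_domain_name>-<num>
# naming scheme described above; the function name is ours, not the project's.
make_instance_name() {
  # $1 = username, $2 = provider domain name, $3 = instance number
  printf '%s-%s-%s\n' "$1" "$2" "$3"
}

make_instance_name toto domain1 0   # -> toto-domain1-0
```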
<br />
=== Week 7: March 7th - March 13th ===<br />
<br />
=== Week 8: March 14th - March 18th ===<br />
<br />
=What is Docker?=<br />
Docker allows you to package an application with all of its dependencies into a standardized unit for software development. <br />
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. <br />
<br />
'''Lightweight:'''<br />
Containers running on a single machine all share the same operating system kernel so they start instantly and make more efficient use of RAM. Images are constructed from layered filesystems so they can share common files, making disk usage and image downloads much more efficient.<br />
<br />
'''Open:'''<br />
Docker containers are based on open standards allowing containers to run on all major Linux distributions and Microsoft operating systems with support for every infrastructure.<br />
<br />
'''Secure:'''<br />
Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application.<br />
<br />
==How is this different from virtual machines?==<br />
[[File:VM_vsContainer.png|200px|thumb|Virtual machines]][[File:Container_vsVM.png|200px|thumb|Docker containers]]<br />
<br />
Containers have similar resource isolation and allocation benefits as virtual machines but a different architectural approach allows them to be much more portable and efficient. <br />
<br />
'''Virtual Machines:'''<br />
Each virtual machine includes the application, the necessary binaries and libraries and an entire guest operating system - all of which may be tens of GBs in size.<br />
<br />
'''Containers:'''<br />
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system. They’re also not tied to any specific infrastructure – Docker containers run on any computer, on any infrastructure and in any cloud.<br />
<br />
==How does this help you build better software?==<br />
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software.<br />
<br />
'''Accelerate Developer Onboarding:'''<br />
Stop wasting hours trying to setup developer environments, spin up new instances and make copies of production code to run locally. With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker.<br />
<br />
'''Empower Developer Creativity:'''<br />
The isolation capabilities of Docker containers free developers from the worries of using “approved” language stacks and tooling. Developers can use the best language and tools for their application service without worrying about causing conflict issues.<br />
<br />
'''Eliminate Environment Inconsistencies:'''<br />
By packaging up the application with its configs and dependencies together and shipping as a container, the application will always work as designed locally, on another machine, in test or production. No more worries about having to install the same configs into a different environment.<br />
<br />
==Easily Share and Collaborate on Applications==<br />
<br />
Docker creates a common framework for developers and sysadmins to work together on distributed applications<br />
<br />
'''Distribute and share content:'''<br />
Store, distribute and manage your Docker images in your Docker Hub with your team. Image updates, changes and history are automatically shared across your organization.<br />
<br />
'''Simply share your application with others:'''<br />
Ship one or many containers to others or downstream service teams without worrying about different environment dependencies creating issues with your application. Other teams can easily link to or test against your app without having to learn or worry about how it works.<br />
<br />
==Ship More Software Faster==<br />
<br />
Docker allows you to dynamically change your application like never before from adding new capabilities, scaling out services to quickly changing problem areas.<br />
<br />
'''Ship 7X More:'''<br />
Docker users on average ship software 7X more after deploying Docker in their environment. More frequent updates provide more value to your customers faster.<br />
<br />
'''Quickly Scale:'''<br />
Docker containers spin up and down in seconds making it easy to scale an application service at any time to satisfy peak customer demand, then just as easily spin down those containers to only use the resources you need when you need it.<br />
<br />
'''Easily Remediate Issues:'''<br />
Docker makes it easy to identify issues and isolate the problem container, quickly roll back to make the necessary changes then push the updated container into production. The isolation between containers makes these changes less disruptive than traditional software models.<br />
<br />
<br />
= System Architecture =<br />
== Instances allocation ==<br />
[[File:Infrastructure_globale.png|center|thumb|1000px|Global infrastructure]]<br />
<br />
== SSH connections to allocated instances ==<br />
[[File:Infra_generale_network.png|center|thumb|1000px|Network global infrastructure]] <br />
<br />
[[File:Legend_infra.png|center|thumb|1000px|Caption]]<br />
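On the client side, the SSH path shown above goes through the coordinator used as a jump host. A minimal sketch follows; the host name, user name, port and instance address are placeholders, not the project's actual values:<br />

```shell
# Hypothetical client-side helper: SSH to an instance through the coordinator
# acting as a jump host. Host, user, port and address values are illustrative.
connect_instance() {
  # $1 = instance address on the provider's Docker network (e.g. 172.18.0.5)
  ssh -o ProxyCommand="ssh -W %h:%p -p 22000 client@provider.example.org" \
      "client@$1"
}
```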
<br />
== Coordinator and Frontend details ==<br />
<br />
[[File:Coordinator.png]][[File:Frontend.png]]<br />
<br />
= Containers automatic deployment =<br />
<br />
The aim of this part is to automate container deployment on the provider side. This includes launching the coordinator instance and the monitoring instance (Shinken). The coordinator instance will allow us to launch new containers and establish the link between clients and their containers.<br />
<br />
[[File:Provider_functioning.png|center|thumb|1000px|Provider functioning]]<br />
<br />
== Build and run ==<br />
<br />
'''First step: user creation'''<br />
<br />
Since we can only interact with the coordinator instance from the front-end, we need a way to launch new containers. It's not possible to do so from within a container; that task needs to be done from the host. That's why the first step is to create a new user on the provider machine, which we will use to launch or stop containers. Once this is done, we deploy the necessary scripts in this user's home directory. These scripts launch and stop containers; deploying them up front is simpler for us than transferring them from the coordinator to the host when the connection is established.<br />
<br />
'''Second step: images creation'''<br />
<br />
Then, the second step consists in building the coordinator and monitoring (Shinken) images. To do so we use Dockerfiles, which allow us to build containers with everything we need. The coordinator instance just contains an SSH server. That container exposes its port 22 and will be used as a jump host to connect the front-end/clients to the other instances.<br />
<br />
'''Third step: coordinator and monitoring instance deployment'''<br />
<br />
Finally, when the images are successfully built, we can run these containers on the Docker daemon. We are now able to connect the front-end to the coordinator instance and deploy instances.<br />
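Put together, the three steps above can be sketched as a bootstrap script. The user name, script names, image tags, network name and published port below are all assumptions for illustration, not the project's actual values:<br />

```shell
# Hypothetical provider bootstrap covering the three steps described above.
# All names (user, scripts, images, network, port) are placeholders.
bootstrap_provider() {
  # Step 1: create the deployment user on the host and give it the
  # launch/stop scripts in its home directory
  sudo useradd -m -s /bin/bash iaas-deploy
  sudo cp launchInstance.sh stopInstance.sh /home/iaas-deploy/

  # Step 2: build the coordinator and monitoring images from their Dockerfiles
  docker build -t coordinator ./coordinator
  docker build -t shinken ./shinken

  # Step 3: run both containers on a shared network; the coordinator's
  # SSH port is published so it can act as a jump host
  docker network create iaas-net
  docker run -d --net iaas-net -p 2222:22 --name coordinator coordinator
  docker run -d --net iaas-net --name shinken shinken
}
```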
<br />
== Resources management ==<br />
<br />
Docker already provides some functionality which allows us to restrict CPU and memory usage. However, we needed to implement some functionality ourselves, such as disk space and bandwidth restriction.<br />
<br />
'''CPU:''' To restrict CPU usage, we just need to know the hyper-threading coefficient and keep track of which CPUs are already in use. There is a Docker option we can use when launching a container that lets us choose which CPUs the container will run on. <br />
The example below shows how this works with 4 CPUs (and a hyper-threading coefficient of 2).<br />
<br />
[[File:CPUShare.png|center|thumb|1000px|CPU share]]<br />
<br />
<br />
'''Memory:''' When launching a container, we set the memory soft limit to the value required/reserved by the client. The hard limit is set to the maximum memory made available by the provider. In doing so, a container can use more memory than its soft limit. But if several containers are running on the same host, Docker will ensure that each container doesn't consume more memory than its soft limit.<br />
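The CPU pinning and the soft/hard memory limits described above map to standard <code>docker run</code> flags. The image name, CPU set and sizes below are placeholders:<br />

```shell
# Sketch of launching a client instance with the resource policy above:
# --cpuset-cpus pins the container to specific CPUs,
# --memory-reservation is the soft limit (the client's reservation),
# --memory is the hard limit (the provider's maximum).
# Image name and values are illustrative only.
run_instance() {
  docker run -d \
    --cpuset-cpus="0,1" \
    --memory-reservation="512m" \
    --memory="2g" \
    my-instance-image
}
```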
<br />
<br />
'''Disk:''' Docker doesn't seem to provide a functionality to restrict disk usage. And yet, it's really important for us to make sure that a client will not use too much of the provider's disk space. To do so, we implemented a watchdog that checks the disk usage of each container every 30 seconds and stops those that reach the limit defined by the provider. We also use that watchdog to inspect and save each container's information, which is used on the front-end to display the container's state and disk space usage. Thanks to that, clients will know if they are about to reach the limit.<br />
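A minimal version of such a watchdog could look like the sketch below. The 30-second period matches the text, but the limit value and the use of <code>docker inspect --size</code> to read the container's writable-layer size are our assumptions:<br />

```shell
# Hypothetical disk-usage watchdog: every 30 s, stop containers whose
# writable layer exceeds LIMIT_MB (the value is a placeholder).
LIMIT_MB=5000

over_limit() {
  # $1 = used MB, $2 = limit MB; succeeds when the limit is exceeded
  [ "$1" -gt "$2" ]
}

watchdog_loop() {
  while true; do
    for c in $(docker ps --format '{{.Names}}'); do
      # SizeRw (bytes written by the container) is reported with --size
      bytes=$(docker inspect --size --format '{{.SizeRw}}' "$c")
      if over_limit "$((bytes / 1024 / 1024))" "$LIMIT_MB"; then
        docker stop "$c"
      fi
    done
    sleep 30
  done
}
```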
<br />
<br />
'''Bandwidth:''' Since all the containers run on the same Docker network, we can use Wondershaper to set a limit on bandwidth usage. Docker then takes care of dividing the available bandwidth equitably among the containers.<br />
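The classic Wondershaper invocation takes the interface name and the download/upload rates in kbit/s; the interface and rates below are placeholders:<br />

```shell
# Hypothetical bandwidth cap on the Docker bridge: Docker then shares the
# shaped bandwidth between the containers on that network.
limit_bandwidth() {
  # $1 = interface (e.g. docker0), $2 = downlink kbit/s, $3 = uplink kbit/s
  sudo wondershaper "$1" "$2" "$3"
}

clear_bandwidth() {
  # $1 = interface whose shaping should be removed
  sudo wondershaper clear "$1"
}
```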
<br />
= Useful links =<br />
<br />
== Network related ==<br />
* [https://wiki.gentoo.org/wiki/SSH_jump_host SSH Jump Host]<br />
<br />
== Git related ==<br />
* [https://www.atlassian.com/git/tutorials/ Git tutorials]<br />
* [https://help.github.com/articles/changing-a-remote-s-url/ Switch https to ssh - remote url github]<br />
* [https://gist.github.com/jexchan/2351996 Multiple SSH keys for github]<br />
<br />
== Docker related ==<br />
* [https://docs.docker.com/ Docker official website]<br />
* [http://2.bp.blogspot.com/-ZGXYBT4l9II/U5BFJwe_jWI/AAAAAAAAGB4/-Le5-NavlGg/s1600/docker-cycle.png Docker cycle]<br />
* [https://www.wanadev.fr/tuto-debuter-et-comprendre-docker/ Understand how Docker works]<br />
* [http://www.occitech.fr/blog/2014/10/tuto-docker-hello-world/ Docker tutorial - Hello world (understand basic commands)]<br />
* [https://robinwinslow.uk/2014/08/27/fix-docker-networking/ Fix Docker's DNS issue with public network]<br />
<br />
== Meteor / MongoDB related ==<br />
* [http://www.angular-meteor.com/ Angular Meteor official website]<br />
* [https://www.mongodb.org/ MongoDB, the database used]<br />
* [https://github.com/aldeed/meteor-collection2 Collection2 - A Meteor package that allows you to attach a schema to a Mongo.Collection]<br />
* [https://github.com/laverdet/node-fibers#futures Asynchronous call in Meteor with fibers/future]<br />
* [http://bootstrap-notify.remabledesigns.com/ Notification module used to display pretty notifications]</div>
<hr />
<div>=Liens=<br />
Lien vers notre [[ECOM_RICM5_Groupe1_2015/SRS|fiche SRS]]<br><br />
Lien vers notre [https://github.com/Patator2/ECOM-RICM5-2015-2016---Sushi-party dépôt github]<br />
<br />
=Résumé du projet=<br />
Le projet E-com SushiWeb est réalisé par 4 étudiants de 5ème année en Réseaux Informatique Communication et Multimédia à Polytech'Grenoble, et est tuteuré par Didier Donsez (partie système) et Sybille Caffiau (partie IHM). Sa durée est fixée à 12 semaines.<br />
SushiWeb est une application web permettant de commander différentes gammes de sushis. Une fois la commande réalisée, il est possible de se les faire livrer à domicile ou d'aller les chercher en magasin. En ce qui concerne le paiement, l'utilisateur aura le choix entre régler par carte bancaire en ligne, ou de payer lors de la récupération de la commande.<br />
<br />
=L'équipe=<br />
Le projet E-com SushiWeb est réalisé par 4 étudiants de 5ème année en Réseaux Informatique Communication et Multimédia à Polytech'Grenoble.<br />
<br />
*Christophe Adam (Option Multimédia) <br />
*Sarah Aissanou: Scrum Master (Option Multimédia)<br />
*Romain Barthelemy: Chef de projet (Option Réseaux)<br />
*Eric Michel Fotsing (Option Réseaux)<br />
<br />
=Target Users=<br />
After running a survey about the use of our sushi e-commerce site, we can draw the following conclusions:<br />
<br />
*The target user of this web application is between 20 and 25 years old, eats sushi very often (at least once a month) and is used to ordering online.<br />
<br />
*Our application will (initially) be used on a computer.<br />
<br />
=Usage Contexts=<br />
Here are the usage contexts of our SushiWeb site:<br />
<br />
*The user is at home and wants to order sushi from their computer.<br />
*The user is away from home (at a friend's place or elsewhere) and wants to order sushi from a computer.<br />
*The user is away from home (on public transport or in a classroom) and wants to browse the SushiWeb catalog.<br />
<br />
=Competitive Analysis=<br />
In our context, our main competitor is the well-known site [http://sushishop.fr Sushi Shop]. It has a clean, modern interface.<br />
The various items and menus are reachable from tabs visible at the top of every page, which gives quick access to the catalog at any time. Every product is shown with a picture, and a detailed description is displayed when the item is clicked, giving efficient access to the information.<br />
The cart is displayed while browsing the catalog, so the customer can quickly see what they have selected. <br />
As for the order, the delivery time can be chosen by the user. In addition, side products such as soy sauce or chopsticks are supplied in quantities chosen by the user, avoiding needless waste. This also makes it possible to provide more of these side products, for an extra charge, to customers who need a large number of them.<br />
<br />
=Platforms=<br />
<br />
According to the survey, the platform most used for online shopping is the computer.<br />
<br />
[[File:PLateforme.JPG|500px|thumb|center|Fig. 1: Most used platform]]<br />
<br />
As for the most used browser, it is Google Chrome.<br />
<br />
[[File:Navigateur.JPG|500px|thumb|center|Fig. 2: Most used web browser]]<br />
<br />
We will therefore proceed by priority:<br />
*Action from a computer on Google Chrome (1).<br />
*Action from a computer on another web browser (2).<br />
*Action from a smartphone (5).<br />
<br />
=Services Offered=<br />
<br />
SushiWeb lets the customer:<br />
<br />
* Create a user account with a login and password<br />
* Browse the catalog<br />
* Add items to the cart<br />
* Manage the cart (e.g. remove items)<br />
* Choose the arrival time of the order<br />
* Choose how to receive the order: home delivery or in-store pickup<br />
* Choose the payment method: online payment by credit card, or on site when collecting the order<br />
* Receive a text message when the order is ready or in case of delay.<br />
<br />
SushiWeb lets the administrator:<br />
<br />
* Create an administrator account with a login and password<br />
* Edit the catalog: remove, modify or add items<br />
* View orders <br />
* Update the status of orders (e.g. not processed, being prepared, ready, shipped).<br />
<br />
=Tasks=<br />
Here is the list of tasks sorted by priority:<br />
<br />
*Let the user browse the catalog and select products into a cart (1)<br />
*Let the user pay for their order on site (1)<br />
*Let the user receive their order in the shop or at home (1)<br />
*Let the restaurant owner update the catalog (1)<br />
*Let the user pay for their order by credit card (2)<br />
*Let the restaurant owner view orders (2)<br />
*Let the restaurant owner update the status of orders (2)<br />
*Alert the customer by SMS/email when the order is ready or delayed (3)<br />
*Let the user create a customer account with a login/password (4)<br />
*Let the restaurant owner create an account with a login/password (4)<br />
*Let the user choose the delivery time of their order (5)<br />
<br />
=Features=<br />
[[File:Use_Case_Uml.jpg|500px|thumb|center|Fig. 3: UML use case diagram of the system]]<br />
<br />
=System Architecture=<br />
<br />
[[File:Diagramme_de_composants.png|700px|thumb|center|Fig. 4: UML component diagram]]<br />
<br />
<br />
<br />
[[File:Diagramme_de_déploiement.png|700px|thumb|center|Fig. 5: UML deployment diagram]]<br />
<br />
=Database=<br />
<br />
[[File:Diagramme_classes.jpg|800px|thumb|center|Fig. 6: UML class diagram]]<br />
<br />
=UI=<br />
<br />
==Abstract UI==<br />
[[File:Ihm_abstraite.png|1150px|thumb|center|Fig. 7: Abstract UI of the system]]<br />
<br />
==UI Mockups==<br />
<br />
[[File:R_cup_ration_de_la_commande.png|800px|thumb|center|Fig. 8: Collecting the order]]<br />
<br />
[[File:Naviguer_dans_le_catalogue.png|800px|thumb|center|Fig. 9: Browsing the catalog]]<br />
<br />
[[File:Mise_jour_du_catalogue_(1).png|800px|thumb|center|Fig. 10: Updating the catalog]]<br />
<br />
=Scrum=<br />
=== Sprint 1: September 8 to September 21 ===<br />
* Choice of the subject<br />
* Assignment of roles (project lead and Scrum master)<br />
* Assignment of tasks<br />
* Identification of user needs<br />
* Market targeting<br />
* Creation of a survey<br />
<br />
=== Sprint 2: September 22 to October 12 ===<br />
* Analysis of the survey results<br />
* Final list of features and their priorities <br />
* Task models for the high-priority tasks<br />
* Class diagrams<br />
* Competitive analysis. <br />
<br />
=== Sprint 3: October 13 to October 20 ===<br />
* Writing of the SRS document<br />
* UI mockups (with Bootstrap)<br />
* EJB layer and project entities (priority 1)<br />
* Implementation of the admin and user rich clients<br />
* Abstract UI <br />
* System architecture (component and deployment diagrams)<br />
<br />
=== Sprint 4: October 21 to November 15 ===<br />
* Implementation of the JPA entities and associated EJBs for priorities 1 and 2.<br />
* Implementation of the REST interfaces of the above EJBs<br />
* Design and coding of the unit tests<br />
* Integration of Maven and Travis into the project<br />
* Implementation of the UI (Angular, Bootstrap, HTML5, CSS...)<br />
<br />
=== Sprint 5: November 16 to November 24 ===<br />
* End of the Maven integration<br />
* End of the coding of the top-priority features at the EJB level<br />
* First version of the catalog-browsing web UI with AngularJS<br />
* First version of the cart-management web UI with AngularJS<br />
* Coding of the layer interfacing the web UIs with the business layer via REST.<br />
<br />
=== Sprint 6: November 25 to December 8 ===<br />
* Continuous integration of the project with Travis<br />
* Unit tests<br />
* End of the coding of the layer interfacing the web UIs with the business layer via REST.<br />
* Integration of security into the project<br />
* Automatic email sending<br />
<br />
=== Sprint 7: December 9 to December 15 ===<br />
* Completion of the last interfaces associated with priority 1<br />
* Administrator interface<br />
* Login/password system<br />
* Writing of the various reports: graphic charter, system design, evaluation report, load report (benchmark), software metrics report <br />
*Preparation of the presentation</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=VT2015_Rust_Programming_Language&diff=24876VT2015 Rust Programming Language2015-10-25T13:13:00Z<p>Romain.Badamo-Barthelemy: Created page with "==Présentation== Enseignants : D. Donsez, GP. Bonneau Sujet : The Rust Programing Language ==Abstract== Inspired by many predecessors, the new programming language Rust d..."</p>
<hr />
<div>==Overview==<br />
<br />
Instructors: D. Donsez, GP. Bonneau<br />
<br />
Topic: The Rust Programming Language<br />
<br />
<br />
==Abstract==<br />
Inspired by many predecessors, the new programming language Rust, developed by Mozilla, is designed to combine most of their strengths in a single language. While some of those strengths seem impossible to reconcile at first, Rust offers a way to combine two major advantages in programming: safety and control.<br />
<br />
==Summary==<br />
Inspired by many predecessors, the new programming language Rust, developed by Mozilla, proposes to combine most of their strengths in a single language. While some of these strengths seem impossible to reconcile at first sight, Rust offers a solution combining two major advantages in programming: safety and efficiency.<br />
<br />
==Written Synthesis==<br />
<br />
===Origins===<br />
Rust is a compiled, multi-paradigm programming language designed and developed by Mozilla Research. Although Mozilla leads the project, the code is open source and community contributions are welcome.<br />
<br />
Its development was initially started with the goal of improving the performance of the Mozilla Firefox web browser, which is currently losing ground to its main competitor, Google Chrome.<br />
<br />
To improve the performance of their browser, Mozilla is currently developing the Servo rendering engine, with Samsung's participation. According to Mozilla, Servo's higher efficiency compared to the current engine, Gecko, should restore Firefox's popularity.<br />
<br />
[[File:From_Gecko_To_Servo.png|500px|thumb|center|Fig. 1: Changing the rendering engine of Mozilla Firefox]]<br />
<br />
===Language Characteristics===<br />
Rust is a safe language supporting functional, imperative, object-oriented and, notably, concurrent programming. Its main characteristic is to allow low-level programming, which is more performant when well managed, while also being safe. Rust delivers C++-like performance without the segfaults and other memory-corruption issues that cost significant debugging time and leave residual errors at run time for end users. Software that cannot afford this kind of error can thus gain performance by using Rust instead of other languages that are safer but less focused on code speed.<br />
<br />
===Memory Safety in Rust===<br />
Rust's position is that, to avoid segfaults and the other memory problems common in low-level programming, the risky situations must be prevented from occurring at all.<br />
<br />
Problems arise when, for a single pointer, several variables have access to the pointed-to value and one of them modifies that value in a way that prevents another from using it as intended. In short, memory errors occur when the following situations are mishandled:<br />
*Multiple accesses to a pointer<br />
*Modifications of the pointed-to value<br />
<br />
[[File:Data_race.png|300px|thumb|center|Fig. 2: Parallel writes to the same data in memory]]<br />
<br />
To avoid these problematic situations, a pointer is associated with a single owner, which can then grant read access to that pointer to other functions. It can also hand ownership of the pointer to another function or another thread, either temporarily or permanently. At compile time, the compiler checks the following:<br />
*For any pointer, parallel read accesses are allowed, and as long as at least one party has read access to the pointer, no write access is permitted.<br />
*Any write access to a given pointer is exclusive; parallel writes are forbidden.<br />
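A minimal Rust sketch (ours, not part of the original synthesis) of the two compile-time rules above: shared borrows may coexist, but no mutation is allowed while any of them is alive.<br />
<br />
```rust
// Shared (&) borrows may coexist; mutation requires exclusive access.
fn demo() -> usize {
    let mut data = vec![1, 2, 3];
    {
        let r1 = &data; // first shared borrow
        let r2 = &data; // parallel reads are fine
        println!("{} {}", r1.len(), r2.len());
        // data.push(4); // rejected at compile time: `data` is still borrowed
    } // shared borrows end here
    data.push(4); // exclusive access restored, mutation allowed
    data.len()
}

fn main() {
    assert_eq!(demo(), 4);
}
```
Uncommenting the `data.push(4)` inside the inner block makes the compiler reject the program, which is exactly the second rule enforced statically.<br />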
<br />
<br />
<br />
Likewise, memory problems in concurrent programming arise when unordered threads can write in parallel to the same pointer in memory. Problems therefore occur when these three elements are combined: <br />
*Multiple accesses to a pointer<br />
*Modifications of the pointed-to value<br />
*Absence of ordering<br />
<br />
Concurrent programming in Rust is designed so that pointer accesses and write permissions are managed the same way as in sequential programming, which solves the safety problems of concurrent programming.<br />
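As a hedged illustration of this point (the function name is ours, not the talk's): moving ownership of a value into a spawned thread means the parent thread can no longer touch it, so unordered parallel writes to the same data are ruled out at compile time.<br />
<br />
```rust
use std::thread;

// Ownership of `v` moves into the closure; after the call to `spawn`,
// the parent thread can no longer access `v`, so no data race is possible.
fn sum_in_thread(v: Vec<i32>) -> i32 {
    let handle = thread::spawn(move || v.iter().sum());
    handle.join().unwrap()
}

fn main() {
    let total = sum_in_thread(vec![1, 2, 3]);
    assert_eq!(total, 6);
}
```
Trying to read `v` in the parent after the `move` closure captures it is a compile-time error, not a runtime bug.<br />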
<br />
<br />
<br />
When absolutely necessary, some compiler restrictions can be bypassed using '''unsafe''' blocks, whose contents are not as constrained as the rest of the code. It is however recommended to pay particular attention whenever such cases must appear, in particular by ensuring the safety of the program around the block in question.<br />
<br />
[[File:Bloc_unsafe.png|400px|thumb|center|Fig. 3: Example of using an unsafe block]]<br />
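As a textual counterpart to Fig. 3 (this particular example is ours, not the one from the slides): a small `unsafe` block that skips bounds checking, with the safety argument kept right next to it, as the synthesis recommends.<br />
<br />
```rust
// Returns the first element without a bounds check.
// The assert! just before the unsafe block is what makes it sound:
// get_unchecked(0) is only reached when the slice is non-empty.
fn first_unchecked(slice: &[i32]) -> i32 {
    assert!(!slice.is_empty());
    unsafe { *slice.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_unchecked(&[7, 8, 9]), 7);
}
```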
<br />
===Rust: a safe bet or a passing fad?===<br />
Rust is still a relatively young programming language. Although the current version is stable, the language's syntax is not immune to future changes. That syntax also requires a non-negligible learning time, on top of the time needed to adapt to the compiler's constraints. Moreover, the lack of IDE support for Rust may put off many developers. In the end, this language could be just a passing trend pushed by Mozilla before falling into oblivion; since that risk is real, migrating existing software to versions rewritten in Rust is a risky move.<br />
<br />
==Conclusion==<br />
Rust offers a safe approach to low-level programming, enabling more reliable software development and avoiding bugs at run time. It is however used mainly for Mozilla's own development; other companies prefer to keep their existing code bases. One can argue that programming problems come from the developer, and that sticking with C++, which has proven itself, is therefore the better option. It remains to be seen whether Rust will break through in the programming world or be abandoned like many other new languages.</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=File:Bloc_unsafe.png&diff=24875File:Bloc unsafe.png2015-10-25T12:33:30Z<p>Romain.Badamo-Barthelemy: Exemple d'utilisation d'un bloc unsafe</p>
<hr />
<div>Example of using an unsafe block</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=File:Data_race.png&diff=24856File:Data race.png2015-10-24T13:14:46Z<p>Romain.Badamo-Barthelemy: Accès parallèles à la même donnée en mémoire</p>
<hr />
<div>Parallel accesses to the same data in memory</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=File:From_Gecko_To_Servo.png&diff=24839File:From Gecko To Servo.png2015-10-24T12:14:51Z<p>Romain.Badamo-Barthelemy: Futur changement du moteur de rendu de Mozilla Firefox: passage de Gecko à Servo</p>
<hr />
<div>Upcoming change of Mozilla Firefox's rendering engine: from Gecko to Servo</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=VT2015&diff=24818VT20152015-10-23T02:30:14Z<p>Romain.Badamo-Barthelemy: /* 23 Octobre */</p>
<hr />
<div>[[EA2014|<< Studies 2014]] [[EA|Contents]] [[VT2016|Studies 2016 >>]]<br />
<br />
<br />
=Technology and Strategy Watch=<br />
* Instructors: Georges-Pierre Bonneau, Didier Donsez<br />
* Course/Module: EAM (HPRJ9R6B) and EAR (HPRJ9R4B) in RICM5<br />
<br />
The goal of these in-depth studies is to produce a synthesis and evaluation of a technology / specification / trend.<br />
<br />
In your future life as an engineer, you will have to train yourself on emerging technologies, and to carry out technology (and strategy) watch for your company and projects.<br />
This involves:<br />
* positioning the topic with respect to the market<br />
* being critical<br />
<br />
Your synthesis is the subject of a convincing oral presentation in front of an audience (in the future, your colleagues, managers or clients), with slides and a rehearsed talk.<br />
To finish convincing the doubting Thomases, you will give a demonstration.<br />
<br />
Your presentation will be graded and commented on by all your classmates via a form (on their mobile phones). Their grades and comments will themselves be graded according to the accuracy of their judgment.<br />
<br />
The presentation can be made with [[reveal.js]]<br />
<br />
[[File:presentation-VT-RICM5-1516.pdf|introductory slides for the course]]<br />
<br />
<br />
[[Image:ChoixSujetsVT2015.png|500px|center|Topic assignments]]<br />
<br />
==Planning==<br />
<br />
<br />
====October 2====<br />
Didier DONSEZ<br />
<br />
Topics: 1, 15, 19, 32<br />
<br />
* 1. Sébastien Toussaint, [https://securityblog.redhat.com/2015/09/02/factoring-rsa-keys-with-tls-perfect-forward-secrecy/ Factoring RSA Keys With TLS Perfect Forward Secrecy], demonstration of an attack, [[VT2015_Factoring_RSA|Summary sheet]], [[Media:VT2015_Factoring_RSA.pdf|Slides]]<br />
* 32. Indoor geolocation: Google [[Eddystone]], Apple [[iBeacon]], [[AltBeacon]], [[VT2015_Geolocalisation_Indoor|Summary sheet]], [[Media:VT2015_Geolocalisation_Indoor.pdf|Slides]]<br />
* 15. [[Graph Databases]], [[VT2015_Graph_Databases|Summary sheet]], [[Media:VT2015_Graph_Databases.pdf|Slides]]<br />
* 19. [[Software Forensics]]: case study Linagora vs Bluemind (you will complete the English and French Wikipedia pages on this subject), [[VT2015_Software_Forensics|Summary sheet]], [[Media:VT2015_Software_Forensics.pdf|Slides]]<br />
<br />
====October 9====<br />
Georges-Pierre BONNEAU<br />
<br />
Topics: 22, 13, 17, 25<br />
* 13: Xueyong Qian, Intelligent Personal Assistant, [[VT2015_Intelligent_Personal_Assistant|Summary sheet]], [[Media:Intelligent_Personal_Assistant.pdf|Slides]]<br />
* 17: Christophe Adam, Expressive Rendering, [[VT2015_Rendu_Expressif|Summary sheet]], [[Media:Rendu_Expressif2.pdf|Slides]]<br />
* 25: Vivien Michel, Log visualization: Kibana/Logstash, [[VT2015_Kibana_Logstash|Summary sheet]], [[Media:Kibana_Logstash.pdf|Slides]]<br />
* 22: Sarah Aissanou, Speech recognition, [[VT2015/Speech_Recognition|Summary sheet]], [[Media:La_reconnaissance_de_la_parole.pdf|Slides]]<br />
<br />
====October 16====<br />
Didier DONSEZ<br />
<br />
Topics: 7, 8, 12<br />
<br />
* Evolution(s) of HTTP: [[HTTP 2.0]], [[SPDY]], ... [[VT2015_HTTP20|Summary sheet]], [[Media:VT2015_HTTP20.pdf|Slides]]<br />
* [[Quick UDP Internet Connection (QUIC)]] [[VT2015_QUIC|Summary sheet]], [[Media:VT2015_QUIC.pdf|Slides]]<br />
* Smart city simulators: UrbanSIM, CanVis, Suicidator City Generator, [[Blended Cities]], ... [[VT2015_SimSmartCities|Summary sheet]], [[Media:VT2015_SimSmartCities.pdf|Slides]]<br />
<br />
====October 23====<br />
Georges-Pierre BONNEAU<br />
<br />
Topics: 14, 30, 6, 29<br />
<br />
* 14: KLIPFFEL Tararaina: [Complex Event Processing]: [[VT2015_Complex_Event_Processing2|Summary sheet]], [[File:Complex_Event_Processing.pdf|Presentation slides]]<br />
* 29: BARTHELEMY Romain: The [[Rust]] Programming Language: [[VT2015_Rust_Programming_Language|Summary sheet]], [[Media:Rust_Programming_Language.pdf|Slides]]<br />
<br />
====November 6====<br />
Didier DONSEZ (alone)<br />
<br />
Topics: 24, 26, 27, 10<br />
<br />
====November 13====<br />
Georges-Pierre BONNEAU<br />
<br />
Topics: 2, 3, 20<br />
<br />
==List of Topics==<br />
# [https://securityblog.redhat.com/2015/09/02/factoring-rsa-keys-with-tls-perfect-forward-secrecy/ Factoring RSA Keys With TLS Perfect Forward Secrecy], demonstration of an attack --> Sébastien Toussaint (MANDATORY)<br />
# [[Structural Health Monitoring]] <br />
# [[CryptoMoney]] ([[BitCoin]], ...)<br />
# [[Memcached]]: uses, architecture patterns. Demonstration on your eCOM project for multimedia resources (images, videos, ...). Demonstration of the [http://dev.mysql.com/doc/ndbapi/en/ndbmemcache.html Memcache API for MySQL Cluster] and of [[Redis.io]]<br />
# [[In-Memory Databases]]<br />
# [[NewSQL]]<br />
# Evolution(s) of HTTP: [[HTTP 2.0]], [[SPDY]], ...<br />
# [[Quick UDP Internet Connection (QUIC)]]<br />
# [[Cloud Foundry]]<br />
# [[OpenStack]]<br />
# [[MicroServices]]<br />
# Smart city simulators: UrbanSIM, CanVis, Suicidator City Generator, [[Blended Cities]], ...<br />
# [[Digital Assistant]]: demonstration of UMich Sirius<br />
# (Complex) [[Event Stream Processing]]: demonstration of [[Apache Storm]] and [[Spark Streaming]] on the [[Azure]] cloud. Additional demo of the [[IFTTT]] SaaS<br />
# [[Software Forensics]]: case study Linagora vs Bluemind (you will complete the English and French Wikipedia pages on this subject).<br />
# [[Privacy policy guidelines]]: Demo: application to your [[ECOM-RICM]] project<br />
# [[Rendu expressif|Expressive rendering]] ([[hatching surface rendering]])<br />
# [[FPGA]]<br />
# [[Graph Databases]]<br />
# [[Akka]]<br />
# [[Continuous Delivery]]<br />
# [[Speech Recognition]]<br />
# Protocols, formats and platforms for smart buildings: [[oBIX]], ... demonstration of [[IoTSys]]<br />
# Orchestration tools: Puppet vs. Chef vs. Ansible vs. Salt (for 2 students)<br />
# Log visualization: demonstration of [[Kibana]] and [[Logstash]] on the logs of your [[eCOM]] project<br />
# Consensus protocols and applications: [[Paxos]], [[Raft]], and consensus frameworks: [[Zookeeper]], [[Curator]], [[Etcd]]<br />
# [[Apache Mesos]], [[Borg]], [[Kubernete]], [[Alibaba Fuxi]]: demonstration on the [[eCOM]] project<br />
# Cluster management: [[Apache Helix]]<br />
# The [[Rust]] Programming Language<br />
# [[SQL-on-Hadoop]]: [[Pinot]]<br />
# [[A/B Testing]] @ Internet Scale ([http://fr.slideshare.net/courseratalks/talkscoursera-ab-testing-internet-scale see])<br />
# Indoor geolocation: Google [[Eddystone]], Apple [[iBeacon]], [[AltBeacon]]<br />
# Performance Debugging: The [http://brendangregg.com/usemethod.html USE Method].</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=File:Rust_Programming_Language.pdf&diff=24817File:Rust Programming Language.pdf2015-10-23T02:29:01Z<p>Romain.Badamo-Barthelemy: </p>
<hr />
<div></div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=VT2015&diff=24766VT20152015-10-21T10:54:21Z<p>Romain.Badamo-Barthelemy: /* 23 Octobre */</p>
<hr />
<div>[[EA2014|<< Studies 2014]] [[EA|Contents]] [[VT2016|Studies 2016 >>]]<br />
<br />
<br />
=Technology and Strategy Watch=<br />
* Instructors: Georges-Pierre Bonneau, Didier Donsez<br />
* Course/Module: EAM (HPRJ9R6B) and EAR (HPRJ9R4B) in RICM5<br />
<br />
The goal of these in-depth studies is to produce a synthesis and evaluation of a technology / specification / trend.<br />
<br />
In your future life as an engineer, you will have to train yourself on emerging technologies, and to carry out technology (and strategy) watch for your company and projects.<br />
This involves:<br />
* positioning the topic with respect to the market<br />
* being critical<br />
<br />
Your synthesis is the subject of a convincing oral presentation in front of an audience (in the future, your colleagues, managers or clients), with slides and a rehearsed talk.<br />
To finish convincing the doubting Thomases, you will give a demonstration.<br />
<br />
Your presentation will be graded and commented on by all your classmates via a form (on their mobile phones). Their grades and comments will themselves be graded according to the accuracy of their judgment.<br />
<br />
The presentation can be made with [[reveal.js]]<br />
<br />
[[File:presentation-VT-RICM5-1516.pdf|introductory slides for the course]]<br />
<br />
<br />
[[Image:ChoixSujetsVT2015.png|500px|center|Topic assignments]]<br />
<br />
==Planning==<br />
<br />
<br />
====October 2====<br />
Didier DONSEZ<br />
<br />
Topics: 1, 15, 19, 32<br />
<br />
* 1. Sébastien Toussaint, [https://securityblog.redhat.com/2015/09/02/factoring-rsa-keys-with-tls-perfect-forward-secrecy/ Factoring RSA Keys With TLS Perfect Forward Secrecy], demonstration of an attack, [[VT2015_Factoring_RSA|Summary sheet]], [[Media:VT2015_Factoring_RSA.pdf|Slides]]<br />
* 32. Indoor geolocation: Google [[Eddystone]], Apple [[iBeacon]], [[AltBeacon]], [[VT2015_Geolocalisation_Indoor|Summary sheet]], [[Media:VT2015_Geolocalisation_Indoor.pdf|Slides]]<br />
* 15. [[Graph Databases]], [[VT2015_Graph_Databases|Summary sheet]], [[Media:VT2015_Graph_Databases.pdf|Slides]]<br />
* 19. [[Software Forensics]]: case study Linagora vs Bluemind (you will complete the English and French Wikipedia pages on this subject), [[VT2015_Software_Forensics|Summary sheet]], [[Media:VT2015_Software_Forensics.pdf|Slides]]<br />
<br />
====October 9====<br />
Georges-Pierre BONNEAU<br />
<br />
Topics: 22, 13, 17, 25<br />
* 13: Xueyong Qian, Intelligent Personal Assistant, [[VT2015_Intelligent_Personal_Assistant|Summary sheet]], [[Media:Intelligent_Personal_Assistant.pdf|Slides]]<br />
* 17: Christophe Adam, Expressive Rendering, [[VT2015_Rendu_Expressif|Summary sheet]], [[Media:Rendu_Expressif2.pdf|Slides]]<br />
* 25: Vivien Michel, Log visualization: Kibana/Logstash, [[VT2015_Kibana_Logstash|Summary sheet]], [[Media:Kibana_Logstash.pdf|Slides]]<br />
* 22: Sarah Aissanou, Speech recognition, [[VT2015/Speech_Recognition|Summary sheet]], [[Media:La_reconnaissance_de_la_parole.pdf|Slides]]<br />
<br />
====October 16====<br />
Didier DONSEZ<br />
<br />
Topics: 7, 8, 12<br />
<br />
* Evolution(s) of HTTP: [[HTTP 2.0]], [[SPDY]], ... [[VT2015_HTTP20|Summary sheet]], [[Media:VT2015_HTTP20.pdf|Slides]]<br />
* [[Quick UDP Internet Connection (QUIC)]] [[VT2015_QUIC|Summary sheet]], [[Media:VT2015_QUIC.pdf|Slides]]<br />
* Smart city simulators: UrbanSIM, CanVis, Suicidator City Generator, [[Blended Cities]], ... [[VT2015_SimSmartCities|Summary sheet]], [[Media:VT2015_SimSmartCities.pdf|Slides]]<br />
<br />
====October 23====<br />
Georges-Pierre BONNEAU<br />
<br />
Topics: 14, 30, 6, 29<br />
<br />
* 14: KLIPFFEL Tararaina: [Complex Event Processing]: [[VT2015_Complex_Event_Processing|Summary sheet]], [[File:Complex_Event_Processing.pdf|Presentation slides]]<br />
* 29: BARTHELEMY Romain: The [[Rust]] Programming Language: [[VT2015_Rust_Programming_Language|Summary sheet]], [[File:Rust_Programming_Language.pdf|Slides]]<br />
<br />
====November 6====<br />
Didier DONSEZ (alone)<br />
<br />
Topics: 24, 26, 27, 10<br />
<br />
====November 13====<br />
Georges-Pierre BONNEAU<br />
<br />
Topics: 2, 3, 20<br />
<br />
==List of Topics==<br />
# [https://securityblog.redhat.com/2015/09/02/factoring-rsa-keys-with-tls-perfect-forward-secrecy/ Factoring RSA Keys With TLS Perfect Forward Secrecy], demonstration of an attack --> Sébastien Toussaint (MANDATORY)<br />
# [[Structural Health Monitoring]] <br />
# [[CryptoMoney]] ([[BitCoin]], ...)<br />
# [[Memcached]]: uses, architecture patterns. Demonstration on your eCOM project for multimedia resources (images, videos, ...). Demonstration of the [http://dev.mysql.com/doc/ndbapi/en/ndbmemcache.html Memcache API for MySQL Cluster] and of [[Redis.io]]<br />
# [[In-Memory Databases]]<br />
# [[NewSQL]]<br />
# Evolution(s) of HTTP: [[HTTP 2.0]], [[SPDY]], ...<br />
# [[Quick UDP Internet Connection (QUIC)]]<br />
# [[Cloud Foundry]]<br />
# [[OpenStack]]<br />
# [[MicroServices]]<br />
# Smart city simulators: UrbanSIM, CanVis, Suicidator City Generator, [[Blended Cities]], ...<br />
# [[Digital Assistant]]: demonstration of UMich Sirius<br />
# (Complex) [[Event Stream Processing]]: demonstration of [[Apache Storm]] and [[Spark Streaming]] on the [[Azure]] cloud. Additional demo of the [[IFTTT]] SaaS<br />
# [[Software Forensics]]: case study Linagora vs Bluemind (you will complete the English and French Wikipedia pages on this subject).<br />
# [[Privacy policy guidelines]]: Demo: application to your [[ECOM-RICM]] project<br />
# [[Rendu expressif|Expressive rendering]] ([[hatching surface rendering]])<br />
# [[FPGA]]<br />
# [[Graph Databases]]<br />
# [[Akka]]<br />
# [[Continuous Delivery]]<br />
# [[Speech Recognition]]<br />
# Protocols, formats and platforms for smart buildings: [[oBIX]], ... demonstration of [[IoTSys]]<br />
# Orchestration tools: Puppet vs. Chef vs. Ansible vs. Salt (for 2 students)<br />
# Log visualization: demonstration of [[Kibana]] and [[Logstash]] on the logs of your [[eCOM]] project<br />
# Consensus protocols and applications: [[Paxos]], [[Raft]], and consensus frameworks: [[Zookeeper]], [[Curator]], [[Etcd]]<br />
# [[Apache Mesos]], [[Borg]], [[Kubernete]], [[Alibaba Fuxi]]: demonstration on the [[eCOM]] project<br />
# Cluster management: [[Apache Helix]]<br />
# The [[Rust]] Programming Language<br />
# [[SQL-on-Hadoop]]: [[Pinot]]<br />
# [[A/B Testing]] @ Internet Scale ([http://fr.slideshare.net/courseratalks/talkscoursera-ab-testing-internet-scale see])<br />
# Indoor geolocation: Google [[Eddystone]], Apple [[iBeacon]], [[AltBeacon]]<br />
# Performance Debugging: The [http://brendangregg.com/usemethod.html USE Method].</div>Romain.Badamo-Barthelemyhttps://air.imag.fr/index.php?title=ECOM_RICM5_Groupe1_2015&diff=24614ECOM RICM5 Groupe1 20152015-10-19T09:04:59Z<p>Romain.Badamo-Barthelemy: </p>
<hr />
<div>=Links=<br />
Link to our [[ECOM_RICM5_Groupe1_2015/SRS|SRS document]]<br />
<br />
=Project Summary=<br />
The SushiWeb e-commerce project is carried out by four fifth-year students in Networks, Computer Science, Communication and Multimedia (RICM) at Polytech'Grenoble, supervised by Didier Donsez (system part) and Sybille Caffiau (UI part). Its duration is set to 12 weeks.<br />
SushiWeb is a web application for ordering various ranges of sushi. Once an order is placed, it can either be delivered at home or picked up in the shop. As for payment, the user can choose between paying online by credit card or paying when collecting the order.<br />
<br />
=The Team=<br />
The SushiWeb e-commerce project is carried out by four fifth-year RICM students at Polytech'Grenoble.<br />
<br />
*Christophe Adam (Multimedia track) <br />
*Sarah Aissanou: Scrum Master (Multimedia track)<br />
*Romain Barthelemy: Project lead (Networks track)<br />
*Eric Michel Fotsing (Networks track)<br />
<br />
=Target Users=<br />
After running a survey about the use of our sushi e-commerce site, we can draw the following conclusions:<br />
<br />
*The target user of this web application is between 20 and 25 years old, eats sushi very often (at least once a month) and is used to ordering online.<br />
<br />
*Our application will (initially) be used on a computer.<br />
<br />
=Contexte d'utilisation=<br />
Voici les contextes d'utilisation de notre site SushiWeb:<br />
<br />
*L'utilisateur est chez lui et souhaite passer une commande de sushis sur son ordinateur.<br />
*L'utilisateur n'est pas chez lui (chez un ami ou autre) et souhaite passer une commande de sushis sur un ordinateur.<br />
*L'utilisateur n'est pas chez lui (transports en commun ou en salle de cours) et souhaite naviguer dans le catalogue du site SushiWeb.<br />
<br />
=Analyse de la concurrence=<br />
Dans notre contexte, notre principal concurrent se trouve être le célèbre site [http://sushishop.fr Sushi Shop]. Il dispose d'une interface claire et moderne.<br />
L’accès aux différents articles et menus se fait depuis les onglets visibles en haut de chaque page du site, ce qui permet un accès rapide au catalogue à tout moment. Pour chaque produit vendu, une image y est associée, et une description détaillée est affichée lors d’un clic sur l’article, permettant un accès efficace aux informations.<br />
Le panier est affiché lors de la navigation du catalogue, permettant au client de savoir ce qu’il a pris rapidement. <br />
Concernant la commande, l’heure d’arrivée peut être choisie par l’utilisateur. Par ailleurs, les produits annexes tels que la sauce ou les baguettes sont fournies dans des quantités choisies par l’utilisateur, évitant tout gaspillage inutile. Cela permet aussi de fournir plus de ces produits annexes, moyennant une surfacturation, pour les clients ayant besoin d’un grand nombre d’entre eux.<br />
<br />
=Plateformes=<br />
<br />
D'après le sondage effectué, la plateforme la plus utilisée pour les achats en ligne est l'ordinateur.<br />
<br />
[[File:PLateforme.JPG|500px|thumb|center|Fig. 1 : Plateforme la plus utilisée]]<br />
<br />
Quant au navigateur le plus utilisé, il s'agit ici de Google chrome.<br />
<br />
[[File:Navigateur.JPG|500px|thumb|center|Fig. 2 : Navigateur web le plus utilisé]]<br />
<br />
Nous procéderons donc par priorité:<br />
*Action depuis un ordinateur sur Google Chrome (1).<br />
*Action depuis un ordinateur sur un autre navigateur web (2).<br />
*Action depuis un Smartphone (5).<br />
<br />
=Services proposés=<br />
<br />
SushiWeb permet à l'utilisateur client de:<br />
<br />
* Créer un compte utilisateur avec login et mot de passe<br />
* Naviguer dans la catalogue<br />
* Ajouter des articles dans le panier<br />
* Gérer le panier (Supprimer des articles par exemple)<br />
* Choisir l'heure d'arrivée de la commande<br />
* Choisir le mode de réception de la commande: Livraison à domicile ou récupération de la commande en magasin<br />
* Choisir le mode de paiement: Paiement en ligne par carte bancaire ou sur place lors de la réception de la commande<br />
* Recevoir un SMS lorsque la commande est prête ou en cas de retard.<br />
<br />
SushiWeb permet à l'administrateur de:<br />
<br />
* Créer un compte Administrateur avec login et mot de passe<br />
* Modifier le catalogue: Supprimer, modifier ou ajouter des articles<br />
* Consulter les commandes <br />
* Mettre à jour l'état des commandes (exemple: commande non-traitée, commande en cours de préparation, commande prête, commandé envoyée).<br />
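The order lifecycle implied by this list (not yet processed, being prepared, ready, sent) could be modeled as a small state enum in the project's Java/EJB layer. The sketch below is illustrative only; the class and constant names are assumptions, not the project's actual entity code.<br />

```java
// Illustrative sketch: the order statuses listed above as a Java enum
// (not processed -> being prepared -> ready -> sent). Names are
// assumptions, not the project's actual entity code.
public class OrderStatusDemo {

    public enum OrderStatus {
        NOT_PROCESSED, IN_PREPARATION, READY, SENT;

        // An order may only advance one step forward at a time.
        public boolean canAdvanceTo(OrderStatus next) {
            return next.ordinal() == this.ordinal() + 1;
        }
    }

    public static void main(String[] args) {
        OrderStatus status = OrderStatus.NOT_PROCESSED;
        System.out.println(status.canAdvanceTo(OrderStatus.IN_PREPARATION)); // prints "true"
        System.out.println(status.canAdvanceTo(OrderStatus.READY));          // prints "false"
    }
}
```

A guard like `canAdvanceTo` keeps the administrator's status updates consistent (e.g. an order cannot jump from "not processed" straight to "sent").<br />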
<br />
=Tâches=<br />
Voici la liste des tâches triée par priorité:<br />
<br />
*Permettre à l'utilisateur de consulter le catalogue et de sélectionner des produits dans un panier (1)<br />
*Permettre à l'utilisateur de payer sa commande sur place (1)<br />
*Permettre à l'utilisateur d'obtenir sa commande en magasin ou à domicile (1)<br />
*Permettre au restaurateur de mettre à jour le catalogue (1)<br />
*Permettre à l'utilisateur de payer sa commande par carte bancaire (2)<br />
*Permettre au restaurateur de consulter les commandes (2)<br />
*Permettre au restaurateur de mettre à jour l'état des commandes (2)<br />
*Alerter le client par SMS/Mail lorsque la commande est prête ainsi qu'en cas de retard(3)<br />
*Permettre à l'utilisateur de créer un compte client login/mot de passe (4)<br />
*Permettre au restaurateur de créer un compte login/mot de passe (4)<br />
*Permettre à l'utilisateur de choisir l'heure de livraison de sa commande (5)<br />
<br />
=Fonctionnalités=<br />
[[File:Use_Case_Uml.jpg|500px|thumb|center|Fig. 3 : Diagramme UML des cas d'utilisation du système]]<br />
<br />
=Architecture système=<br />
<br />
[[File:Diagramme_de_composants.png|700px|thumb|center|Fig. 4 : Diagramme UML de composants]]<br />
<br />
<br />
<br />
[[File:Diagramme_de_déploiement.png|700px|thumb|center|Fig. 5 : Diagramme UML de déploiement]]<br />
<br />
=Bases de données=<br />
<br />
[[File:Diagramme_classes.jpg|800px|thumb|center|Fig. 6 : Diagramme UML de classes]]<br />
<br />
=IHM=<br />
<br />
==IHM abstraite==<br />
[[File:Ihm_abstraite.png|1150px|thumb|center|Fig. 7 : IHM abstraite du système]]<br />
<br />
==Maquettes IHM==<br />
<br />
[[File:R_cup_ration_de_la_commande.png|800px|thumb|center|Fig. 8 : Récupérer la commande]]<br />
<br />
[[File:Naviguer_dans_le_catalogue.png|800px|thumb|center|Fig. 9 : Naviguer dans le catalogue]]<br />
<br />
[[File:Mise_jour_du_catalogue_(1).png|800px|thumb|center|Fig. 10 : Mettre à jour le catalogue]]<br />
<br />
=Scrum=<br />
=== Sprint 1 : du 8 septembre au 21 Septembre ===<br />
* Choix du sujet<br />
* Répartition des rôles (Chef de projet et scrum master)<br />
* Répartition des tâches<br />
* Identifications des besoins utilisateurs<br />
* Ciblage du marché<br />
* Création d'un sondage<br />
<br />
=== Sprint 2 : du 22 septembre au 12 Octobre ===<br />
* Analyse des résultats du questionnaire<br />
* Liste fixes des fonctionnalités ainsi que leurs priorités <br />
* Réalisation des modèles de tâches prioritaires<br />
* Réalisation des diagrammes de classe<br />
* Analyse de la concurrence. <br />
<br />
=== Sprint 3 : du 13 Octobre au 20 Octobre ===<br />
* Création de la fiche SRS<br />
* Création des maquettes IHM (avec bootstrap)<br />
* Réalisation de la couche EJB et entités du projet(priorité 1)<br />
* Implémentation des clients lourds admin et utilisateur<br />
* Mise en place de l'IHM abstraite <br />
* Setup of the system architecture (component and deployment diagrams)</div>Romain.Badamo-Barthelemy