https://air.imag.fr/api.php?action=feedcontributions&user=Tran-Quang-Tan.Bui&feedformat=atomair - User contributions [en]2024-03-28T08:11:42ZUser contributionsMediaWiki 1.35.13https://air.imag.fr/index.php?title=Projets_2015-2016&diff=29253Projets 2015-20162016-04-07T08:13:07Z<p>Tran-Quang-Tan.Bui: /* RICM */</p>
<hr />
<div><<[[Projets 2014-2015]] | [[Projets]] | [[Projets 2016-2017]]>><br />
=RICM=<br />
==RICM3==<br />
<br />
==RICM4==<br />
===Semester S8 project===<br />
<br />
Teachers in charge: Olivier Richard, Didier Donsez<br />
<br />
<br />
* '''Mid-term evaluation on Monday 7 March''': format: 10 min (at most 5 min of presentation with 3 slides at most, 5 min of discussion). This evaluation will count towards the final grade.<br />
<br />
'''General guidelines:'''<br />
<br />
* '''You must be proactive!''' If some points are unspecified or poorly specified, handle them yourself and justify your choices. For possible technical problems you can: dig into the question, contact the author of the code where applicable, file a bug report ('''note:''' this takes preparation!), submit a patch, or contact the teacher or the person following the project.<br />
<br />
* '''You must maintain a project tracking sheet''': it must be updated every week; it gathers the essential elements of the project, records the project's evolution and presents its roadmap. '''Note:''' the sheet's name must be composed of the project name suffixed with ricm4_2015_2016.<br />
<br />
* '''You must use a version control system''' for your development, such as [http://en.wikipedia.org/wiki/Git_%28software%29 git], and we recommend the [https://github.com github] site for hosting your public repository.<br />
<br />
* Public documents (for example on github) must be written in English (README, documentation, code comments, variable and function names). A bonus will be granted if the report and the slides are in English (the defense will be in French).<br />
<br />
{|class="wikitable alternance"<br />
|+ RICM4 2015-2016 project assignments<br />
|-<br />
|<br />
!scope="col"| Subject<br />
!scope="col"| Students<br />
!scope="col"| Teacher(s)<br />
!scope="col"| Tracking sheet<br />
!scope="col"| Git repository<br />
!scope="col"| Documents<br />
|-<br />
<br />
!scope="row"| 1<br />
| [[Dashboard pour gestionnaire de tâches et de ressources]]<br />
| CROUZET, MATHIEU<br />
| Richard<br />
| [[Projets-2015-2016-DashBoard| '''Fiche''']] - [[DashBoard-UML| '''UML''']] - [[DashBoard-SRS| '''SRS''']]<br />
| [https://github.com/MatthieuCrouzet/Projet4A '''github''']<br />
| [[Media:RapportProjetDashBoard.pdf|Rapport]] - [[Media:TransparentsDashboard.pdf|Transparents]] - [[Media:FlyerProjet1.pdf|Flyer]] - [[Media:gl_groupe1.pdf|Rapport Consultant]] - [[Media:Paterns.pdf|Patterns]] - [[Media:PresentationDashboard.pdf|Presentation]]<br />
|-<br />
<br />
!scope="row"| 2<br />
| [[Speeding Simplified Script Language]]<br />
| POPEK, BERTRAND-DALECHAMPS, WEI<br />
| Richard<br />
| [[Projets-2015-2016-SSSL| '''Fiche''']] - [[SSSL-UML| '''UML''']] - [[Projets-2015-2016-SSSL-SRS | '''SRS''']] <br />
| [https://github.com/FlorianPO/Speeding-Simplified-Script-Language.git '''github''']<br />
| [[Media:RapportProjet2.pdf|Rapport]] - [[Media:Groupe2_AIR.pdf|Rapport Consultant]] - [[Media:PresentationIntermediaireProjet2.pdf|Presentation_Intermediaire]] - [[Media:PresentationFinalProjet2.pdf|Presentation_finale]] - [[Media:FlyerProjet2.pdf|Flyer]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Borne interactive]] <br />
| DUNAND - NAVARRO - REVEL<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Borne-Interactive| '''Fiche''']] - [[Projets-2015-2016-Borne-Interactive-SRS | '''SRS''']] - [[Projets-2015-2016-Borne-Interactive/UML_Diagrams | '''UML''']]<br />
| [https://github.com/Kant73/InteractiveDisplay '''github''']<br />
| [[Media:RapportProjet3.pdf|Rapport]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:IPopo.pdf|Rapport Consultant]] - [[Media:PatternDesign.pdf | '''Design Pattern''']] - [[Media:PresentationInteractiveDisplay.pdf|Présentation Intermédiaire]] - [[Media:BorneInteractive2016pres.pdf|Présentation finale]]<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Sonotone]]<br />
| LECORPS, VOUTAT, Hattinguais <br />
| Maisonnasse, Richard<br />
| [[Projets-2015-2016-Sonotone| '''Fiche''']] - [[Projets-2015-2016-Sonotone-SRS | '''SRS''']] - [[Projets-2015-2016-Sonotone-UML | '''UML''']]<br />
| [https://github.com/Gorgorot38/Sonotone-RICM4 '''github''']<br />
| [[Media:RapportProjetf.pdf|Rapport]] - [[Media:SlidesSonotone.pdf|Transparents]] - [[Media:FlyerProjet4.pdf|Flyer]] - [[Media:SRS_Consultant_Sonotone_4.pdf|Rapport_Consultant]] - [[Media:pattern_sonotone.pdf|Pattern]] - [[Media:Soutenance.pdf|Soutenance_miparcours]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Sous-titre_en_temps_r%C3%A9el_d%27un_cours| Sous-titre d'un cours en temps réel]]<br />
| LECHEVALLIER, BUI, OUNISSI <br />
| Maisonnasse<br />
| [[LiveSubtitles| '''Fiche''']] - [[Media:UMLLS.pdf|UML]] - [[LiveSubtitlesSRS | '''SRS''']]<br />
| [https://github.com/Lechevallier/RealTimeSubtitles '''github''']<br />
| [[Media:Real-Time-Subtitles-Report.pdf|Rapport]] - [[Media:Real-Time-Subtitles.pdf|Transparents]] - [[Media:RealTimeSubtitles-Leaflet.pdf|Flyer]] - [[Media: SRS_Groupe_5.pdf| Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 6<br />
| [[GrenobloisFuté]]<br />
| MOURET, DELAPORTE, LUCIDARME<br />
| Nicolas Palix<br />
| [[GrenobleFuté| '''Fiche''']] - [[SRS - GrenobloisFuté | '''SRS''']] - [[UML Grenoblois Fute | '''UML''']]<br />
| [https://github.com/Lucidarme/Osmand.git '''github''']<br />
| [[Media:RapportGrenobloisfute.pdf|Rapport]] - [[Media:midPresentation.pdf|Mid Presentation]] - [[Media:Flyer GrenobloisFute(3).pdf|Flyer]] - [[Media:gl_G14.pdf|Rapport Consultant]] - [[Media:Présentation GrenobloisFuté.pdf|Transparents]]<br />
|-<br />
<br />
!scope="row"| 7<br />
| [[Streaming en stéréoscopie]]<br />
| ZHAO ZILONG, HAMMOUTI<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Streaming-Stereoscopie| '''Fiche''']] - [[SRS - Streaming en stéréoscopie | '''SRS''']] - [[Projets-2015-2016-streaming_stereo-UML | '''UML''']]<br />
| [https://github.com/zhao-zilong/streaming_stereo '''github''']<br />
| [[Media:Rapport_ZHAO_HAMMOUTI.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet6.pdf|Flyer]] - [[Media:bruel_medewou_ndiaye.pdf|Rapport_consultant]] - [[Media:streaming.pdf|mi-parcours]] - [[Media:Soutenance_ZHAO_HAMMOUTI.pdf|Soutenance]]<br />
|-<br />
<br />
!scope="row"| 8<br />
| [[PersyCup2016]]<br />
| BIN, ZEGAOUI, ELLAPIN <br />
| Donsez, Maisonnasse<br />
| [[PersyCup| '''Fiche''']]<br />
| [https://github.com/legominstorm/lego '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet7.pdf|Flyer]] - [[Media:SoutenanceMiParcours-Persycup2016.pdf|Soutenance Mi-parcours]]<br />
|-<br />
<br />
!scope="row"| 9<br />
| [[Services étendus pour le modèle de composants iPOPO pour Python]]<br />
| FOUNAS, HALLAL, GATTAZ <br />
| Calmant & Donsez<br />
| [[Proj-2015-2016-Extensions_IPOPO | '''Fiche''']] - [[Proj-2015-2016-Extensions_IPOPO/SRS | '''SRS''']] - [[Proj-2015-2016-Extensions_IPOPO/UML | '''UML''']] <br />
| [https://github.com/abdelazizFounas/ipopo/tree/tlsremote '''github IPOPO'''] <br /> [https://github.com/gattazr/IPOPO-Remote-Client '''github IPOPO Client''']<br />
| [[Media:9_RapportProjet9.pdf|Rapport]] - [[Media:9_TransparentsProojet.pdf|Transparents]] - [[Media:9_FlyerProjet8.pdf|Flyer]] - [[Media:3-SRS-Pres.pdf| Rapport Consultant]] - [[Media:9_PatternStrat.pdf|Pattern Design]] - [[Media:9_Mid-Presentation.pdf|Mid Presentation]] - [[Media:9_Gantt.pdf|Gantt]] - [[Media:9_sources.pdf|Sources]]<br />
|-<br />
<br />
!scope="row"| 10<br />
| [[IndoorGeoloc2016]]<br />
| ARRADA - CRASTES - FAURE - STOIAN <br />
| Donsez<br />
| [[Proj-2015-2016-IndoorGeoloc/Fiche| '''Fiche''']] - [[Proj-2015-2016-IndoorGeoloc/SRS|SRS]]<br />
| [https://github.com/QuentinFA/Geoloc_Indoor '''github''']<br />
| [[Media:Rapport_final_Geoloc.pdf|Rapport]] - [[Media:Présentation_Geoloc.pdf|Transparents]] - [[Media:Flyer_geoloc.pdf|Flyer]] - [[Media: SRSGroupe17.pdf| Rapport Consultant]] - [[Media:Mi_parcours.pdf|Mid presentation]] - [[Media:DESIGN_PATTERN_GEOLOC.pdf|Design Pattern]]<br />
|-<br />
<br />
!scope="row"| 11<br />
| [[UPnPOpenHAB2016]]<br />
| Medewou, Ndiaye Yacine, Bruel Anna<br />
| Didier Donsez<br />
| [[Proj-Openhab-2016| '''Fiche''']] - [[Proj-2015-2016-Int%C3%A9gration_de_cam%C3%A9ra_de_surveillance_UPnP_%C3%A0_Openhab/SRS| '''SRS''']] - [[Proj-Openhab/UML| '''UML''']]<br />
| [https://github.com/openHab-UPnP '''github''']<br />
| [[Media:RapportProjet111.pdf|Rapport]] - [[Media:FlyerProjetAnglais111.pdf|EnglishFlyer]] - [[Media:FlyerProjet10.pdf|FrenchFlyer]] - [[Media:soutenace111.pdf|Soutenance]] - [[Media:TransparentsProojet111.pdf|Rapport Analyste]] - [[Media:gl_ZHAO_HAMMOUTI.pdf|Rapport Consultant]] - [[Media:pattern_ZHAO_HAMMOUTI.pdf|Patterns]] - [[Media:fichier111.pdf|Mini soutenance]]<br />
|-<br />
<br />
!scope="row"| 12<br />
| [[Sign2Speech]]<br />
| NIOGRET, NOGUERON, TITH<br />
| Didier Donsez<br />
| [[sign2speech_ricm4_2015_2016| '''Fiche''']] - [[SRS - Sign2Speech | '''SRS''']] - [[UML | '''UML''']]<br />
| [https://github.com/SignToSpeech-Project '''github'''] [[Media:Sign2Speech_2015_2015.tar.gz|'''Sign2Speech Client''']] [[Media:Sign2Speech-server_2015_2015.tar.gz|'''Sign2Speech Server''']]<br />
| [[Media:RapportProjet12_Sign2Speech_2015_2016.pdf|Rapport]] - [[Media:TransparentsProjet12_Sign2Speech_2015_2016.pdf|Transparents]] - [[Media:FlyerProjet11_Sign2Speech_2015-2016.pdf|Flyer]] - [[Media:12-Sign2Speech-RapportConsultant.pdf|Rapport Consultant]] - [[Media:12-Sign2Speech-MidPres.pdf|Mid presentation]] - [[Sign2Speech_RICM4_2015-2016_User_Manual|User Manual]]<br />
|-<br />
<br />
!scope="row"| 13<br />
| [[AstroImage]] <br />
| RACHEX, BLANC, GERRY<br />
| Olivier Richard et Bruno Bzeznik<br />
| [[Proj-2015-2016-Astroimage/Fiche| '''Fiche''']] - [[AstroImage/SRS | '''SRS''']] - [[Media:AstroImage-UML.png | '''UML''']]<br />
| [https://github.com/nicolas-blanc/AstroImage '''github''']<br />
| [[Media:DossierAstroImage.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerAstroImage.pdf|Flyer]] - [[Media:13-AstroImage-RapportConsultant.pdf|Rapport Consultant]] - [https://docs.google.com/presentation/d/15F8DRktwmOuSNabdxMASniyr-TIiRzGNNG1mOhcoSnk/edit?usp=sharing '''Patterns'''] - [https://prezi.com/wacg-8dk6kme/astroimage '''Soutenance'''] <br />
|-<br />
<br />
!scope="row"| 14<br />
| [[Tachymètre]]<br />
| MACE, NOUGUIER, RAMEL<br />
| Olivier Gattaz<br />
| [[Fiche - Tachymètre | '''Fiche''']] - [[SRS - Tachymètre| '''SRS''']] - [[UML - Tachymètre| '''UML''']]<br />
| [https://github.com/Quego/Tachymetre '''github - Tachymètre''']<br />
| [[Media:Projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Rapport]] - [[Media:Pr%C3%A9sentation_projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Transparents]] - [[Media:D%C3%A9pliant_Tachym%C3%A8tre_-_MAC%C3%89_NOUGUIER_RAMEL.pdf|Flyer]] - [[Media:srs_tachymetre.pdf|Rapport consultant]] - [[Media:14_PatternDesign.pdf | Pattern Design]] - [[Media:Tachymetre_Presentation.pdf | Présentation de milieu de projet]]<br />
|-<br />
<br />
!scope="row"| 15<br />
| [[SmartProjector]]<br />
| BRANGER, HABLOT<br />
| Donsez, Maisonnasse<br />
| [[Fiche_SmartProjector_ricm4_2015_2016| '''Fiche''']] - [[SRS - SmartProjector| '''SRS''']] - [[UML - SmartProjector| '''UML''']]<br />
| [https://github.com/P0ppoff/SmartProjector '''github''']<br />
| [[Media:Expose final.pdf|Rapport]] - [[Media:PresentationPorjet.pdf|Transparents Présentation]] - [[Media:Flyer_SmartProjector.pdf|Flyer]] - [[Media:Gl_groupe16.pdf|Rapport Consultant]] - [http://air.imag.fr/index.php/Patron_de_conception_-_SmartProjector Patterns] - [[Media:Soutenance_SP.pdf|Soutenance finale]] - [[Media:archive.zip|Code Source]]<br />
|-<br />
<br />
|}<br />
<br />
===Project list===<br />
<br />
* [[Dashboard pour gestionnaire de tâches et de ressources]], Olivier Richard<br />
* [[Moteur distribué d'exécution de commande]], Olivier Richard<br />
* [[Environnement d'expérimentation de pour NVIDIA Shield (Tegra X1)]], Olivier Richard <br />
* [[Speeding Simplified Script Language]], Olivier Richard<br />
<br />
* Open-source assistance for hearing impairment, with Didier Donsez, Jérome Maisonnasse, Marie-Paule Balicco (SAH UGA) and Nicolas Vuillerme<br />
** [[Borne interactive]] (1 topic)<br />
** [[Sonotone]] (1 topic)<br />
** [[Sous-titre en temps réel d'un cours]] (1 topic)<br />
* [[GrenobloisFuté]] Traffic layer on OsmAnd via a plugin. Dynamic data from La Métro. Android development. Nicolas Palix.<br />
* [[GeoDiff]] Production, visualization and merging of variations (diffs) of geocoded information: Nicolas Palix<br />
* [[Smart campus augmenté et contributif]] Didier Donsez, Vivien Quema<br />
<br />
* [[Streaming en stéréoscopie]] over [[WebRTC]] with rendering on an [[Oculus]] headset for the [[RobAIR]] robot, Jérôme Maisonnasse ([http://gstconf.ubicast.tv/videos/stereoscopic-3d-video/ see]).<br />
* [[STM32F7]]: setting up the compilation toolchain on Linux with [[OpenSTM32]] and [[OpenOCD]]. Nicolas Palix<br />
* [[PersyCup2016]]: Persyval Robocup, Didier Donsez, Vivien Quema, Jérome Maisonnasse. (3 students)<br />
* [[Services étendus pour le modèle de composants iPOPO pour Python]], Didier Donsez & Thomas Calmant. (2 students)<br />
* [[SmartClassRoom2016|Development of a shared interface for touch tables (SmartClassRoom project)]], Didier Donsez, Jérôme Maisonnasse. (2 students)<br />
* [[iRock2016|iRock: landslide monitoring]], Didier Donsez & Vivien Quema<br />
* [[IndoorGeoloc2016|Indoor geolocation using BLE and WiFi beacons, based on STM32 and iBeacon & AltBeacon beacons]], Didier Donsez & Vivien Quema<br />
* [[UPnPOpenHAB2016|Integration and management of UPnP surveillance cameras in the open-source home-automation platform OpenHAB and myOpenHAB]], Didier Donsez & Jérome Maisonnasse.<br />
<br />
'''Lower-priority projects'''<br />
<br />
* [[Liveprogramming with Kivy]], Olivier Richard<br />
* [[AstroImage]] astronomy image production, Olivier Richard and Bruno Bzeznik<br />
* [[G-code Cruncher]] CNC machine control (Nucleo grbl + esp8266 + SD card), Olivier Richard<br />
* [[Intégration OpenHAB / OpenTele]] Nicolas Palix<br />
<br />
==RICM5==<br />
<br />
===Semester S10 project===<br />
<br />
Teacher in charge: Didier Donsez<br />
<br />
Kick-off: Monday 25/01, 10:30-12:30, room P253 (meet in front of the AIR room) - videoconference for Thibaut Cordier<br />
<br />
Defense: Thursday 17/03, 13:00-17:00, room P043 (Polytech Grenoble), then room C005 (building C)<br />
<br />
Students: RICM5 + 8 Avosti DUT RT students<br />
<br />
Reminder of MPI sessions:<br />
* Session 1: Tuesday 26 January, afternoon - Stéphanie Diligent<br />
* Session 2: Tuesday 2 February, afternoon - Stéphanie Diligent<br />
* Session 3: Monday 8 February, morning - Emmanuelle Tréhoust<br />
* Session 4: Thursday 11 February, morning - Emmanuelle Tréhoust<br />
* Session 5: Monday 21 March, morning - Stéphanie Diligent and Emmanuelle Tréhoust<br />
<br />
=====Defenses=====<br />
Schedule:<br />
* Bossa (13:00-13:40, room P043)<br />
* Immersion EDF (13:45-14:25, room P043)<br />
* IaaS Docker (14:30-15:10, room P043)<br />
* SmartCampus (15:15-15:55, room P043 and room P259 AIR)<br />
* SmartClassRoom (16:15-16:55, C005)<br />
* Farewell drinks (17:00-18:00, C005)<br />
<br />
Instructions:<br />
* Each defense consists of 15 minutes of presentation, 15 minutes of demonstration and 10 minutes of questions. One slide must be devoted to the work assigned to and carried out by the DUT (AVOSTI) students.<br />
* Rehearse your presentation and your demonstration several times.<br />
* All documents (including photos, videos and ''[[Logiciels#Screencast|screencast]]s'') must be accessible from the table below and from each tracking sheet. Bring a copy on a USB key.<br />
* The students accompany you during your defense.<br />
* '''ALL borrowed equipment must be brought back and returned in a tote bag at the defense.'''<br />
<br />
=====Projects=====<br />
{|class="wikitable alternance"<br />
|+ RICM5 2015-2016 project assignments<br />
|-<br />
|<br />
!scope="col"| Subject<br />
!scope="col"| Students<br />
!scope="col"| Teacher(s)<br />
!scope="col"| Tracking sheet<br />
!scope="col"| Git repository<br />
!scope="col"| Documents<br />
|-<br />
<br />
!scope="row"| 1<br />
| [http://air.imag.fr/index.php/IaaS_collaboratif_avec_Docker IaaS - Docker]<br />
| Eudes Robin, Damotte Alan, Barthelemy Romain, Mammar Malek, Guo Kai<br />
| Didier Donsez<br />
| [[Projets-2015-2016-IaaS_Docker| '''Fiche''']] - [[Projets-2015-2016-IaaS_Docker-SRS| '''SRS''']]<br />
| [https://github.com/EudesRobin/iaas-collaboratif '''github''']<br />
| [[Media:RapportMPI_Iaas.pdf|Rapport MPI]] - [[Media:Transparents_IaaS.pdf|Transparents]] - [[Media:Flyer_IaaS.pdf|Flyer]] - [https://youtu.be/qtqgZNrgcRc '''Screencast''']<br />
|-<br />
!scope="row"| 2<br />
| [http://air.imag.fr/index.php/Portage_de_Bossa Porting Bossa to the Linux 4.x kernel]<br />
| Eric Michel Fotsing, Ombeline Rossi, Longfei Yao<br />
| Nicolas Palix, Didier Donsez<br />
| [[Projets-2015-2016-Portage_Bossa| '''Fiche''']] - [[Projets-2015-2016-Portage_Bossa-SRS| '''SRS''']]<br />
| Private repository<br />
| [[Media:Rapport_Bossa.pdf|Rapport]] - [[Media:Transparents_Bossa.pdf|Transparents]] - [[Media:Flyer_Bossa.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Visite immersive en réalité virtuelle dans une usine avec EDF]]<br />
| Adam Christophe, Aissanou Sarah, Klipffel Tararaina, Qian Jean, Zominy Laurent<br />
| Didier Donsez, Georges-Pierre Bonneau, Thibaut Cordier (EDF)<br />
| [[Projets-2015-2016-VisiteImmersiveEDF| '''Fiche''']]<br />
| [https://github.com/VisiteImmersiveEDF '''github''']<br />
| [[Media:RapportProjetX.pdf|Rapport]] - [[Media:TransparentsProojetX.pdf|Transparents]] - [[Media:FlyerProjetX.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Contribution à OpenSmartCampus]] (see http://data.beta.metropolegrenoble.fr/)<br />
| Quentin Torck, Vivien Michel, Jérémy Hammerer, Rama Codazzi, Zhengmeng Zhang<br />
| Didier Donsez, Vivien Quéma<br />
| [[Projets-2015-2016-OpenSmartCampus| '''Fiche''']]<br />
| [https://github.com/quentin74/SmartCampus.git '''github''']<br />
| [[Media:RapportProjetOpenSmartCampus2016.pdf|Rapport]] - [[Media:TransparentsProojetOpenSmartCampus2016.pdf|Transparents]] - [[Media:FlyerProjetOpenSmartCampus2016.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Contribution à SmartClassRoom]] (distributed and shared touch interfaces)<br />
| Saussac Thibault, Toussaint Sébastien, Hamdani Youcef, Zoppello Sebastien, Melik sak, Mesnier Vincent<br />
| Jérôme Maisonnasse, Didier Donsez<br />
| [[Projets-2015-2016-SmartClassRoom| '''Fiche''']] - [[Projets-2015-2016-SmartClassRoom/SRS| '''SRS''']]<br />
| [https://github.com/vince0508/SmartClassroom-TiledDisplayPart-master_Main '''github''']<br />
| [[Media:RapportProjetSmartClassRoom.pdf|Rapport]] - [[Media:TransparentsProjetSmartClassRoom.pdf|Transparents]] - [[Media:FlyerProjetSmartClassRoom.pdf|Flyer]] - [https://youtu.be/FEwoA4S9rsM '''Screencast/Vidéo''']<br />
|-<br />
<br />
<br />
|}<br />
<br />
===Cancelled and postponed projects===<br />
* Project with [[Tango Project]] (cancelled)<br />
* Hack the Beam, Didier Donsez & Jérôme Maisonnasse.<br />
* [[Algorithmes de suivi de personnes pour robot de téléprésence RobAIR]] (Jérôme Maisonnasse, Didier Donsez)<br />
<br />
=M2PGI=<br />
==[[Projets M2PGI Services Machine-to-Machine|Machine-to-Machine Services project]]==<br />
* [[PM2M/2016/TP|Topic and groups]]</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitlesSRS&diff=29252LiveSubtitlesSRS2016-04-07T07:40:07Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div>= Introduction =<br />
<br />
== Purpose of the requirements document ==<br />
This Software Requirements Specification (SRS) identifies the requirements for the "RealTimeSubtitles" project. This is an open-source project, and we present what we have done on it in order to attract the interest of potential new contributors. This document is a guideline to the functionality offered and the problems that the system solves.<br />
<br />
== Scope of the product ==<br />
RealTimeSubtitles is an app designed to help partially deaf students in a classroom. The aim is to transcribe the teacher's speech live and display it on the corresponding slide as subtitles. In addition, students in the classroom can correct the subtitles through a collaborative HMI. We use the Google Speech API for transcription, reveal.js for the slides, and JavaScript.<br />
<br />
<br />
= General Description=<br />
== Product perspective ==<br />
<br />
The main goal of our project is to help partially deaf students be more autonomous when attending a lecture. This project was proposed by the department for disabled students at the UGA. In addition, we have to design a collaborative HMI that lets students correct the subtitles in real time.<br />
<br />
<br />
== Product functions ==<br />
<br />
The app is divided into two parts:<br />
*Transcription by Google Speech<br />
First, the API must recognize the teacher's speech and transcribe it in real time. Final results are appended in the right place according to the current slide.<br />
*The collaborative HMI<br />
Designed for students, it allows a logged-in student to follow a course. While the teacher speaks, students can either follow the course and read the subtitles, or edit the subtitles to correct the results.<br />
<br />
== User characteristics ==<br />
<br />
There are three types of users for our app:<br />
*The teacher, talking while showing his slides<br />
*The students editing the notes<br />
*The students reading the notes, including the partially deaf students<br />
<br />
<br />
== Operating environment ==<br />
The Google Speech API works in Google Chrome. A good Internet connection is required for transcription.<br />
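As an illustrative sketch of this constraint (not the project's actual code), a page can check whether the browser exposes the Web Speech API before enabling transcription; Chrome exposes it under the prefixed name webkitSpeechRecognition:<br />

```javascript
// Returns true when the given global object (e.g. `window` in a browser)
// exposes the Web Speech API, either prefixed (Chrome) or unprefixed.
function supportsSpeechRecognition(globalObj) {
  return 'webkitSpeechRecognition' in globalObj ||
         'SpeechRecognition' in globalObj;
}

// In the browser one would call supportsSpeechRecognition(window) and
// show a "please use Google Chrome" message when it returns false.
```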
<br />
== General constraints ==<br />
*The teacher needs to have his slides in reveal.js<br />
*The teacher needs to speak loudly and not too fast<br />
*The room has to be quiet (no background noise)<br />
*These conditions reduce errors and help the API transcribe the speech well. Even so, the result won't be perfect, due to the instability of the Google Speech API.<br />
<br />
<br />
= Specific requirements, covering functional, non-functional and interface requirements =<br />
<br />
== Requirement X.Y.Z (in Structured Natural Language) ==<br />
<br />
=== Speech recognition ===<br />
<br />
'''Description''': Capture the voice and return a textual transcription<br />
<br />
'''Inputs''': Voice of a speaker<br />
<br />
'''Source''': Human<br />
<br />
'''Outputs''': Textual data<br />
<br />
'''Destination''': User<br />
<br />
'''Action''': A speaker talks into a microphone and the system returns the transcript as text<br />
<br />
'''Non functional requirements''': Accurate detection of spoken words<br />
<br />
'''Pre-condition''': User has a microphone<br />
<br />
'''Post-condition''': Words are detected<br />
<br />
'''Side-effects''': Words are not detected, or are detected incorrectly<br />
<br />
<br />
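The requirement above can be sketched as follows. The event shape mirrors the Web Speech API's SpeechRecognitionEvent (a list of results, each carrying an isFinal flag); the function name is illustrative, not the project's actual code:<br />

```javascript
// Fold a recognition event into the finalized text (ready to be appended
// to the subtitles) and the interim text (still subject to change).
function foldResults(event) {
  let finalText = '';
  let interimText = '';
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const alternative = event.results[i][0]; // best-ranked alternative
    if (event.results[i].isFinal) {
      finalText += alternative.transcript;
    } else {
      interimText += alternative.transcript;
    }
  }
  return { finalText, interimText };
}

// In the browser this would be wired to a recognizer, e.g.:
//   const rec = new webkitSpeechRecognition();
//   rec.continuous = true; rec.interimResults = true;
//   rec.onresult = (e) => { const { finalText } = foldResults(e); /* ... */ };
```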
<br />
=== Render the subtitles to slides ===<br />
<br />
'''Description''': Show the subtitles on the slides<br />
<br />
'''Inputs''': Spoken words<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': slides with subtitles<br />
<br />
'''Destination''': slides<br />
<br />
'''Action''': Get the spoken words and show them correctly on the slides<br />
<br />
'''Non functional requirements''': No loss of data<br />
<br />
'''Pre-condition''': Spoken words are detected<br />
<br />
'''Post-condition''': Slides are shown with subtitles<br />
<br />
'''Side-effects''': Subtitles are not displayed well and hide the slides. Subtitles are not readable.<br />
<br />
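A minimal sketch of the data model this requirement implies (assumed, not the project's actual code): finalized text is appended to a per-slide bucket keyed by slide index, so each slide shows only the speech that accompanied it:<br />

```javascript
// Append a finalized chunk of speech to the subtitle bucket of one slide,
// collapsing extra whitespace so concatenated chunks stay readable.
function appendSubtitle(buckets, slideIndex, text) {
  const current = buckets[slideIndex] || '';
  buckets[slideIndex] = (current + ' ' + text).trim().replace(/\s+/g, ' ');
  return buckets;
}

// In the browser, slideIndex would come from reveal.js (e.g.
// Reveal.getIndices().h) and the bucket's text would be rendered into a
// subtitle element on the current slide.
```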
<br />
<br />
=== Editing subtitles ===<br />
'''Description''': The user can edit the subtitles: add or edit words<br />
<br />
'''Inputs''': A wrongly detected word<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': The corrected word<br />
<br />
'''Destination''': The displayed subtitles<br />
<br />
'''Action''': The user clicks on the word he wants to edit, then edits it with his keyboard. The user clicks on the blank space between words to add a word.<br />
<br />
'''Non functional requirements''': Easy to click between words or to add a word<br />
<br />
'''Pre-condition''': Words are detected<br />
<br />
'''Post-condition''': Words are added or modified<br />
<br />
'''Side-effects''': Removing a correct word; text not displayed well.<br />
<br />
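The two correction operations above can be sketched as pure functions over a subtitle line (illustrative names and behaviour, not the project's actual code):<br />

```javascript
// Replace the word at a given position in a subtitle line.
function replaceWord(line, index, word) {
  const words = line.split(' ');
  words[index] = word;
  return words.join(' ');
}

// Insert a word into a subtitle line, before position `index`
// (i.e. into the blank space the user clicked on).
function insertWord(line, index, word) {
  const words = line.split(' ');
  words.splice(index, 0, word);
  return words.join(' ');
}
```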
<br />
<br />
=== Session login ===<br />
'''Description''': Each user has his own session<br />
<br />
'''Inputs''': User profile<br />
<br />
'''Source''': User profile<br />
<br />
'''Outputs''': A logged user<br />
<br />
'''Destination''': security manager, session control<br />
<br />
'''Action''': The user clicks on the login form and enters his login and password.<br />
<br />
'''Non functional requirements''': Secured against SQL injection<br />
<br />
'''Pre-condition''': The user wants to log in and knows his login and password<br />
<br />
'''Post-condition''': The user is logged in<br />
<br />
'''Side-effects''': Users are tracked by id. Users cannot delete other users' courses.<br />
<br />
= Product Evolution =<br />
<br />
*A different, more efficient speech API<br />
*Using RealTimeSubtitles in meetings/conferences<br />
<br />
<br />
= References =<br />
<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#speechreco-result<br />
*https://developers.google.com/web/updates/2013/01/Voice-Driven-Web-Apps-Introduction-to-the-Web-Speech-API<br />
*https://openclassrooms.com/courses/des-applications-ultra-rapides-avec-node-js/socket-io-passez-au-temps-reel<br />
*https://openclassrooms.com/courses/concevez-votre-site-web-avec-php-et-mysql/tp-page-protegee-par-mot-de-passe<br />
*https://openclassrooms.com/courses/concevez-votre-site-web-avec-php-et-mysql/variables-superglobales-sessions-et-cookies<br />
*https://www.youtube.com/watch?v=o0xr1JRZOb4&index=2&list=PLLnpHn493BHFWQGA1PcyQZWAfR96a4CkH<br />
*https://atmospherejs.com<br />
*https://www.meteor.com/tutorials/blaze/adding-user-accounts<br />
*http://meteortips.com/<br />
*https://github.com/CollectionFS/Meteor-CollectionFS#installation<br />
*http://getbootstrap.com/getting-started/<br />
*http://srault95.github.io/meteor-app-base/meteor-collection-helpers/</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitlesSRS&diff=29251LiveSubtitlesSRS2016-04-07T07:39:26Z<p>Tran-Quang-Tan.Bui: /* Product Evolution */</p>
<hr />
<div>= Introduction =<br />
<br />
== Purpose of the requirements document ==<br />
This Software Requirements Specification (SRS) identifies the requirements for project "Sign2Speech". This is an open source projet and we shall present what we did for this project in case to catch interest of new potential contributors. This document is a guideline about the functionalities offered and the problems that the system solve.<br />
<br />
== Scope of the product ==<br />
RealTimeSubtiltes is an app designed to help partially deaf stutents in a classroom. The aim is to transcript a teacher speech in live and display the speech on the corresponding slide as subtitles. On the other hands, students in the classroom can correct the subtitle on a collaborative HMI. We have to use GoogleAPI Speech for the transcript, reveal.js for the slides and JavaScript. .<br />
<br />
<br />
= General Description=<br />
== Product perspective ==<br />
<br />
The main target of our project is to help partially deaf student to be more autonomous attending a lecture. This project is proposed by the department of disabled students at the UGA. In addition, we have to design a collaborative HMI for students to correct in real time the subtitles. <br />
<br />
<br />
== Product functions ==<br />
<br />
The app is divided into 2 parts : <br />
*The transcript by GoogleSpeech<br />
In a first place the API must recognize the teacher speech and transcript it in real time. Final result are appended into the right place according to the current slide.<br />
*The collaborative HMI <br />
Designed for students, it allows logged in student to follow a course. While the teacher speech the students can either follow the courses and read the subtitles, or edit the subtitles to correct the results.<br />
<br />
== User characteristics ==<br />
<br />
There are three types of users for our app<br />
*The teacher talking while showing his slides<br />
*The students editing notes<br />
*The students reading the notes and the partially deaf students <br />
<br />
<br />
== Operating environment ==<br />
The GoogleSpeech API works on google Chrome. A good Internet connection is required for the transcript.<br />
<br />
== General constraints ==<br />
*The teacher needs to have his slides on reveal.js<br />
*The teacher need to talk loud and not so fast<br />
*The room has to be quiet (no noise)<br />
*These elements can reduce errors and help the API to transcript well the speech. However, it won’t be perfect due to the instability of GoogleSpeech API.<br />
<br />
<br />
= Specific requirements, covering functional, non-functional and interface requirements =<br />
<br />
== Requirement X.Y.Z (in Structured Natural Language) ==<br />
<br />
=== Speech recognition ===<br />
<br />
'''Description''': Capture the voice and return a textual translation<br />
<br />
'''Inputs''': Voice of a speaker<br />
<br />
'''Source''': Human<br />
<br />
'''Outputs''': Textual data<br />
<br />
'''Destination''': User<br />
<br />
'''Action''': A speaker talk with a microphone and the system return the transcript in textual<br />
<br />
'''Non functional requirements''': Accurate detection of spoken words<br />
<br />
'''Pre-condition''': User has a microphone<br />
<br />
'''Post-condition''': Words are detected<br />
<br />
'''Side-effects''': words are not detected or wrong detection<br />
<br />
<br />
<br />
=== Render the subtitles to slides ===<br />
<br />
'''Description''': Show the subtitles to the slides<br />
<br />
'''Inputs''': words spoken<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': slides with subtitles<br />
<br />
'''Destination''': slides<br />
<br />
'''Action''': : get the spoken words and show them correctly to the slides<br />
<br />
'''Non functional requirements''': No loss of data<br />
<br />
'''Pre-condition''': Spoken words are detected<br />
<br />
'''Post-condition''': Slides are shown with subtitles<br />
<br />
'''Side-effects''':Subtitles are not well shown and hide the slides. Subtitles are not readable.<br />
<br />
<br />
<br />
=== Editing subtitles ===<br />
'''Description''': User can edit subtitles : add or edit words<br />
<br />
'''Inputs''': Wrong detected word<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': corrected word<br />
<br />
'''Destination''': shown subtitles<br />
<br />
'''Action''': User click on the word he wants to edit then edit it with his keyboard. User click on blank space between words to add a word.<br />
<br />
'''Non functional requirements''': Easy to click between words, or add a word<br />
<br />
'''Pre-condition''': Words are detected<br />
<br />
'''Post-condition''': words are added or modified<br />
<br />
'''Side-effects''': Removing a good word, text not well displayed.<br />
<br />
<br />
<br />
=== Session login ===<br />
'''Description''': User has his own session<br />
<br />
'''Inputs''': User profile<br />
<br />
'''Source''': User profile<br />
<br />
'''Outputs''': A logged user<br />
<br />
'''Destination''': security manager, session control<br />
<br />
'''Action''': The user clicks on the login form and enters his login and password.<br />
<br />
'''Non functional requirements''': Secured against SQL injection<br />
<br />
'''Pre-condition''': The user wants to log in and knows his login and password<br />
<br />
'''Post-condition''': The user is logged in<br />
<br />
'''Side-effects''': Users are tracked by ID. Users cannot delete other users' courses.<br />
<br />
= Product Evolution =<br />
<br />
*A more efficient speech API<br />
*Using RealTimeSubtitles in meetings/conferences</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitlesSRS&diff=29250LiveSubtitlesSRS2016-04-07T07:39:04Z<p>Tran-Quang-Tan.Bui: /* Specific requirements, covering functional, non-functional and interface requirements */</p>
<hr />
<div>= Introduction =<br />
<br />
== Purpose of the requirements document ==<br />
This Software Requirements Specification (SRS) identifies the requirements for the project "RealTimeSubtitles". This is an open source project, and we present what we did on it in the hope of catching the interest of potential new contributors. This document is a guideline to the functionalities offered and the problems that the system solves.<br />
<br />
== Scope of the product ==<br />
RealTimeSubtitles is an app designed to help partially deaf students in a classroom. The aim is to transcribe a teacher's speech live and display it on the corresponding slide as subtitles. Meanwhile, students in the classroom can correct the subtitles through a collaborative HMI. We use the Google Speech API for the transcription, reveal.js for the slides, and JavaScript.<br />
<br />
<br />
= General Description=<br />
== Product perspective ==<br />
<br />
The main target of our project is to help partially deaf students be more autonomous when attending a lecture. This project was proposed by the department for disabled students at the UGA. In addition, we have to design a collaborative HMI so that students can correct the subtitles in real time. <br />
<br />
<br />
== Product functions ==<br />
<br />
The app is divided into two parts:<br />
*The transcript by Google Speech<br />
First, the API must recognize the teacher's speech and transcribe it in real time. Final results are appended in the right place according to the current slide.<br />
*The collaborative HMI<br />
Designed for students, it allows a logged-in student to follow a course. While the teacher speaks, students can either follow the course and read the subtitles, or edit them to correct the recognition results.<br />
<br />
== User characteristics ==<br />
<br />
There are three types of users for our app:<br />
*The teacher talking while showing his slides<br />
*The students editing notes<br />
*The students reading the notes and the partially deaf students <br />
<br />
<br />
== Operating environment ==<br />
The Google Speech API works in Google Chrome. A good Internet connection is required for the transcription.<br />
<br />
== General constraints ==<br />
*The teacher needs to have his slides in reveal.js<br />
*The teacher needs to speak loudly and not too fast<br />
*The room has to be quiet (no background noise)<br />
These conditions reduce errors and help the API transcribe the speech well. However, the result won't be perfect due to the instability of the Google Speech API.<br />
<br />
<br />
= Specific requirements, covering functional, non-functional and interface requirements =<br />
<br />
== Requirement X.Y.Z (in Structured Natural Language) ==<br />
<br />
=== Speech recognition ===<br />
<br />
'''Description''': Capture the voice and return a textual translation<br />
<br />
'''Inputs''': Voice of a speaker<br />
<br />
'''Source''': Human<br />
<br />
'''Outputs''': Textual data<br />
<br />
'''Destination''': User<br />
<br />
'''Action''': A speaker talks into a microphone and the system returns the transcript as text<br />
<br />
'''Non functional requirements''': Accurate detection of spoken words<br />
<br />
'''Pre-condition''': User has a microphone<br />
<br />
'''Post-condition''': Words are detected<br />
<br />
'''Side-effects''': Words may go undetected or be detected incorrectly<br />
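As a sketch of how this requirement could be met in Google Chrome, the Web Speech API (`webkitSpeechRecognition`) can capture the microphone and stream transcripts. The `collectFinalTranscript` helper and the wiring below are our own illustration under those assumptions, not code from the project:

```javascript
// Sketch only: continuous speech capture with Chrome's Web Speech API.
// `collectFinalTranscript` is a hypothetical helper, not project code.

// Pure helper: keep only final results and join them into one transcript.
function collectFinalTranscript(results) {
  return results
    .filter(function (r) { return r.isFinal; })
    .map(function (r) { return r.text.trim(); })
    .join(' ');
}

// Browser-only wiring, guarded so the helper can also run outside Chrome.
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  var recognition = new webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening for the whole lecture
  recognition.interimResults = true;  // stream words while they are spoken
  recognition.lang = 'fr-FR';
  recognition.onresult = function (event) {
    var results = [];
    for (var i = 0; i < event.results.length; i++) {
      results.push({ isFinal: event.results[i].isFinal,
                     text: event.results[i][0].transcript });
    }
    console.log(collectFinalTranscript(results));
  };
  recognition.start();
}
```

Interim results let subtitles appear while a sentence is still being spoken; only final results would be stored with the slide.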
<br />
<br />
<br />
=== Render the subtitles to slides ===<br />
<br />
'''Description''': Display the subtitles on the slides<br />
<br />
'''Inputs''': words spoken<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': slides with subtitles<br />
<br />
'''Destination''': slides<br />
<br />
'''Action''': Get the spoken words and display them correctly on the slides<br />
<br />
'''Non functional requirements''': No loss of data<br />
<br />
'''Pre-condition''': Spoken words are detected<br />
<br />
'''Post-condition''': Slides are shown with subtitles<br />
<br />
'''Side-effects''': Subtitles may be displayed badly, hiding the slides or being unreadable.<br />
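A minimal sketch of this step, assuming reveal.js is loaded: `Reveal.getCurrentSlide()` is part of the reveal.js API, while the `.subtitle` element and the `appendCaption` helper (which drops the oldest words so the caption never grows to cover the slide) are our own assumptions:

```javascript
// Sketch: append recognized text to a caption bar on the current slide.

// Pure helper: add a sentence, dropping the oldest words beyond maxChars.
function appendCaption(current, sentence, maxChars) {
  var words = (current + ' ' + sentence).trim().split(/\s+/);
  var text = words.join(' ');
  while (text.length > maxChars && words.length > 1) {
    words.shift();                 // forget the oldest word first
    text = words.join(' ');
  }
  return text;
}

// Browser-only wiring; Reveal is the global exposed by reveal.js.
if (typeof Reveal !== 'undefined') {
  window.showSubtitle = function (sentence) {
    var slide = Reveal.getCurrentSlide();
    var bar = slide.querySelector('.subtitle');
    if (!bar) {
      bar = document.createElement('div');
      bar.className = 'subtitle';
      slide.appendChild(bar);
    }
    bar.textContent = appendCaption(bar.textContent, sentence, 120);
  };
}
```

Attaching the caption to the current slide (rather than a global overlay) is what keeps each transcript chunk with the slide it was spoken over.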
<br />
<br />
<br />
=== Editing subtitles ===<br />
'''Description''': The user can edit the subtitles: add or correct words<br />
<br />
'''Inputs''': An incorrectly detected word<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': corrected word<br />
<br />
'''Destination''': shown subtitles<br />
<br />
'''Action''': The user clicks on the word he wants to edit, then edits it with his keyboard. He clicks on the blank space between words to add a word.<br />
<br />
'''Non functional requirements''': Easy to click between words, or add a word<br />
<br />
'''Pre-condition''': Words are detected<br />
<br />
'''Post-condition''': Words are added or modified<br />
<br />
'''Side-effects''': A correct word may be removed, or the text may be displayed badly.<br />
<br />
<br />
<br />
=== Session login ===<br />
'''Description''': User has his own session<br />
<br />
'''Inputs''': User profile<br />
<br />
'''Source''': User profile<br />
<br />
'''Outputs''': A logged user<br />
<br />
'''Destination''': security manager, session control<br />
<br />
'''Action''': The user clicks on the login form and enters his login and password.<br />
<br />
'''Non functional requirements''': Secured against SQL injection<br />
<br />
'''Pre-condition''': The user wants to log in and knows his login and password<br />
<br />
'''Post-condition''': The user is logged in<br />
<br />
'''Side-effects''': Users are tracked by ID. Users cannot delete other users' courses.<br />
<br />
= Product Evolution =<br />
<br />
* A “real-time” window that could show a representation of the hand that the camera is currently analyzing. It would let the user know whether the camera is able to correctly recognize his hand. It could be done with Qt Creator. Our application is not really “user-friendly” at this time.<br />
<br />
*“Two-hand” symbols, which are currently not implemented in our application<br />
<br />
* Improvements to trajectory recognition<br />
<br />
* Language Model<br />
<br />
*A better camera</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=Projets_2015-2016&diff=29239Projets 2015-20162016-04-07T07:14:44Z<p>Tran-Quang-Tan.Bui: /* Projet Semestre S8 */</p>
<hr />
<div><<[[Projets 2014-2015]] | [[Projets]] | [[Projets 2016-2017]]>><br />
=RICM=<br />
==RICM3==<br />
<br />
==RICM4==<br />
===Projet Semestre S8===<br />
<br />
Enseignants responsables : Olivier Richard, Didier Donsez<br />
<br />
<br />
* '''Evaluation à mi-parcours le lundi 7 mars''': Format: 10min (5min de présentation 3 slides au plus, 5min de discussion). Cette évaluation sera prise en compte dans la note finale.<br />
<br />
'''Consignes générales:'''<br />
<br />
* '''Vous devez être pro-actifs !!!''': Si des points sont pas ou mals spécifiés, vous le faîtes et vous justifiez vos choix. Pour les problèmes techniques éventuels vous pouvez: vous creusez la question, vous contactez l'auteur du code si il y a lieux, vous faites un rapport de bug ('''Attention:''' ca se prépare !), vous soumettez un patch, vous contactez l'enseignant ou la personne suivant le projet.<br />
<br />
* '''Vous devez maintenir une fiche de suivi de projet''': elle doit être mise à jour chaque semaine, elle rassemble les élements essentiels du projet, elle <br />
indique les évolutions du projet et présente sa feuille de route. '''Note:''' le nom de la fiche doit être composé du nom du projet et suffixé par ricm4_2015_2016.<br />
<br />
* '''Vous devez utiliser un logiciel de gestion de version''' pour vos développements comme [http://en.wikipedia.org/wiki/Git_%28software%29 git ] et nous vous conseillons d'utiliser le site [https://github.com github] pour l'hébergement de votre dépôt public.<br />
<br />
* Les document public (exemple sur github) doivent être rédigés en anglais (README, documentation, commentaires de code, nom de variables et de fonctions). Une bonnification sera accordée si le rapport et les transparents sont en anglais (la soutenance sera en francais).<br />
<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM4 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
|-<br />
<br />
!scope="row"| 1<br />
| [[Dashboard pour gestionnaire de tâches et de ressources]]<br />
| CROUZET, MATHIEU<br />
| Richard<br />
| [[Projets-2015-2016-DashBoard| '''Fiche''']] - [[DashBoard-UML| '''UML''']] - [[DashBoard-SRS| '''SRS''']]<br />
| [https://github.com/MatthieuCrouzet/Projet4A '''github''']<br />
| [[Media:RapportProjetDashBoard.pdf|Rapport]] - [[Media:TransparentsDashboard.pdf|Transparents]] - [[Media:FlyerProjet1.pdf|Flyer]] - [[Media:gl_groupe1.pdf|Rapport Consultant]] - [[Media:Paterns.pdf|Patterns]] - [[Media:PresentationDashboard.pdf|Presentation]]<br />
|-<br />
<br />
!scope="row"| 2<br />
| [[Speeding Simplified Script Language]]<br />
| POPEK, BERTRAND-DALECHAMPS, WEI<br />
| Richard<br />
| [[Projets-2015-2016-SSSL| '''Fiche''']] - [[SSSL-UML| '''UML''']] - [[Projets-2015-2016-SSSL-SRS | '''SRS''']] <br />
| [https://github.com/FlorianPO/Speeding-Simplified-Script-Language.git '''github''']<br />
| [[Media:RapportProjet2.pdf|Rapport]] - [[Media:Groupe2_AIR.pdf|Rapport Consultant]] - [[Media:PresentationIntermediaireProjet2.pdf|Presentation_Intermediaire]] - [[Media:PresentationFinalProjet2.pdf|Presentation_finale]] - [[Media:FlyerProjet2.pdf|Flyer]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Borne interactive]] <br />
| DUNAND - NAVARRO - REVEL<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Borne-Interactive| '''Fiche''']] - [[Projets-2015-2016-Borne-Interactive-SRS | '''SRS''']] - [[Projets-2015-2016-Borne-Interactive/UML_Diagrams | '''UML''']]<br />
| [https://github.com/Kant73/InteractiveDisplay '''github''']<br />
| [[Media:RapportProjet3.pdf|Rapport]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:IPopo.pdf|Rapport Consultant]] - [[Media:PatternDesign.pdf | '''Design Pattern''']] - [[Media:PresentationInteractiveDisplay.pdf|Présentation Intermédiaire]] - [[Media:BorneInteractive2016pres.pdf|Présentation finale]]<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Sonotone]]<br />
| LECORPS, VOUTAT, Hattinguais <br />
| Maisonnasse, Richard<br />
| [[Projets-2015-2016-Sonotone| '''Fiche''']] - [[Projets-2015-2016-Sonotone-SRS | '''SRS''']] - [[Projets-2015-2016-Sonotone-UML | '''UML''']]<br />
| [https://github.com/Gorgorot38/Sonotone-RICM4 '''github''']<br />
| [[Media:RapportProjetf.pdf|Rapport]] - [[Media:SlidesSonotone.pdf|Transparents]] - [[Media:FlyerProjet4.pdf|Flyer]] - [[Media:SRS_Consultant_Sonotone_4.pdf|Rapport_Consultant]] - [[Media:pattern_sonotone.pdf|Pattern]] - [[Media:Soutenance.pdf|Soutenance_miparcours]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Sous-titre_en_temps_r%C3%A9el_d%27un_cours| Sous-titre d'un cours en temps réel]]<br />
| LECHEVALLIER, BUI, OUNISSI <br />
| Maisonnasse<br />
| [[LiveSubtitles| '''Fiche''']]- [[Media:UMLLS.pdf|UML]]<br />
| [https://github.com/Lechevallier/RealTimeSubtitles '''github''']<br />
| [[Media:Real-Time-Subtitles-Report.pdf|Rapport]] - [[Media:Real-Time-Subtitles.pdf|Transparents]] - [[Media:RealTimeSubtitles-Leaflet.pdf|Flyer]] - [[Media: SRS_Groupe_5.pdf| Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 6<br />
| [[GrenobloisFuté]]<br />
| MOURET, DELAPORTE, LUCIDARME<br />
| Nicolas Palix<br />
| [[GrenobleFuté| '''Fiche''']] - [[SRS - GrenobloisFuté | '''SRS''']] - [[UML Grenoblois Fute | '''UML''']]<br />
| [https://github.com/Lucidarme/Osmand.git '''github''']<br />
| [[Media:RapportGrenobloisfute.pdf|Rapport]] - [[Media:midPresentation.pdf|Mid Presentation]] - [[Media:Flyer GrenobloisFute(3).pdf|Flyer]] - [[Media:gl_G14.pdf|Rapport Consultant]] - [[Media:Présentation GrenobloisFuté.pdf|Transparents]]<br />
|-<br />
<br />
!scope="row"| 7<br />
| [[Streaming en stéréoscopie]]<br />
| ZHAO ZILONG, HAMMOUTI<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Streaming-Stereoscopie| '''Fiche''']] - [[SRS - Streaming en stéréoscopie | '''SRS''']] - [[Projets-2015-2016-streaming_stereo-UML | '''UML''']]<br />
| [https://github.com/zhao-zilong/streaming_stereo '''github''']<br />
| [[Media:Rapport_ZHAO_HAMMOUTI.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet6.pdf|Flyer]] - [[Media:bruel_medewou_ndiaye.pdf|Rapport_consultant]] - [[Media:streaming.pdf|mi-parcours]] - [[Media:Soutenance_ZHAO_HAMMOUTI.pdf|Soutenance]]<br />
|-<br />
<br />
!scope="row"| 8<br />
| [[PersyCup2016]]<br />
| BIN, ZEGAOUI, ELLAPIN <br />
| Donsez, Maisonnasse<br />
| [[PersyCup| '''Fiche''']]<br />
| [https://github.com/legominstorm/lego '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet7.pdf|Flyer]] - [[Media:SoutenanceMiParcours-Persycup2016.pdf|Soutenance Mi-parcours]]<br />
|-<br />
<br />
!scope="row"| 9<br />
| [[Services étendus pour le modèle de composants iPOPO pour Python]]<br />
| FOUNAS, HALLAL, GATTAZ <br />
| Calmant & Donsez<br />
| [[Proj-2015-2016-Extensions_IPOPO | '''Fiche''']] - [[Proj-2015-2016-Extensions_IPOPO/SRS | '''SRS''']] - [[Proj-2015-2016-Extensions_IPOPO/UML | '''UML''']] <br />
| [https://github.com/abdelazizFounas/ipopo/tree/tlsremote '''github IPOPO'''] <br /> [https://github.com/gattazr/IPOPO-Remote-Client '''github IPOPO Client''']<br />
| [[Media:9_RapportProjet9.pdf|Rapport]] - [[Media:9_TransparentsProojet.pdf|Transparents]] - [[Media:9_FlyerProjet8.pdf|Flyer]] - [[Media:3-SRS-Pres.pdf| Rapport Consultant]] - [[Media:9_PatternStrat.pdf|Pattern Design]] - [[Media:9_Mid-Presentation.pdf|Mid Presentation]] - [[Media:9_Gantt.pdf|Gantt]] - [[Media:9_sources.pdf|Sources]]<br />
|-<br />
<br />
!scope="row"| 10<br />
| [[IndoorGeoloc2016]]<br />
| ARRADA - CRASTES - FAURE - STOIAN <br />
| Donsez<br />
| [[Proj-2015-2016-IndoorGeoloc/Fiche| '''Fiche''']] - [[Proj-2015-2016-IndoorGeoloc/SRS|SRS]]<br />
| [https://github.com/QuentinFA/Geoloc_Indoor '''github''']<br />
| [[Media:Rapport_final_Geoloc.pdf|Rapport]] - [[Media:Présentation_Geoloc.pdf|Transparents]] - [[Media:Flyer_geoloc.pdf|Flyer]] - [[Media: SRSGroupe17.pdf| Rapport Consultant]] - [[Media:Mi_parcours.pdf|Mid presentation]] - [[Media:DESIGN_PATTERN_GEOLOC.pdf|Mid presentation]]<br />
|-<br />
<br />
!scope="row"| 11<br />
| [[UPnPOpenHAB2016]]<br />
| Medewou , Ndiaye Yacine , Bruel Anna <br />
| Didier Donsez<br />
| [[Proj-Openhab-2016| '''Fiche''']] - [[Proj-2015-2016-Int%C3%A9gration_de_cam%C3%A9ra_de_surveillance_UPnP_%C3%A0_Openhab/SRS| '''SRS''']] - [[Proj-Openhab/UML| '''UML''']]<br />
| [https://github.com/openHab-UPnP '''github''']<br />
| [[Media:RapportProjet111.pdf|Rapport]] - [[Media:FlyerProjetAnglais111.pdf|EnglishFlyer]] - [[Media:FlyerProjet10.pdf|FrenchFlyer]] - [[Media:soutenace111.pdf|Soutenance]] - [[Media:TransparentsProojet111.pdf|Rapport Analyste]] - [[Media:gl_ZHAO_HAMMOUTI.pdf|Rapport Consultant]] - [[Media:pattern_ZHAO_HAMMOUTI.pdf|Patterns]] - [[Media:fichier111.pdf|Mini soutenance]]<br />
|-<br />
<br />
!scope="row"| 12<br />
| [[Sign2Speech]]<br />
| NIOGRET, NOGUERON, TITH<br />
| Didier Donsez<br />
| [[sign2speech_ricm4_2015_2016| '''Fiche''']] - [[SRS - Sign2Speech | '''SRS''']] - [[UML | '''UML''']]<br />
| [https://github.com/SignToSpeech-Project '''github'''] [[Media:Sign2Speech_2015_2015.tar.gz|'''Sign2Speech Client''']] [[Media:Sign2Speech-server_2015_2015.tar.gz|'''Sign2Speech Server''']]<br />
| [[Media:RapportProjet12_Sign2Speech_2015_2016.pdf|Rapport]] - [[Media:TransparentsProjet12_Sign2Speech_2015_2016.pdf|Transparents]] - [[Media:FlyerProjet11_Sign2Speech_2015-2016.pdf|Flyer]] - [[Media:12-Sign2Speech-RapportConsultant.pdf|Rapport Consultant]] - [[Media:12-Sign2Speech-MidPres.pdf|Mid presentation]] - [[Sign2Speech_RICM4_2015-2016_User_Manual|User Manual]]<br />
|-<br />
<br />
!scope="row"| 13<br />
| [[AstroImage]] <br />
| RACHEX, BLANC, GERRY<br />
| Olivier Richard et Bruno Bzeznik<br />
| [[Proj-2015-2016-Astroimage/Fiche| '''Fiche''']] - [[AstroImage/SRS | '''SRS''']] - [[Media:AstroImage-UML.png | '''UML''']]<br />
| [https://github.com/nicolas-blanc/AstroImage '''github''']<br />
| [[Media:DossierAstroImage.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerAstroImage.pdf|Flyer]] - [[Media:13-AstroImage-RapportConsultant.pdf|Rapport Consultant]] - [https://docs.google.com/presentation/d/15F8DRktwmOuSNabdxMASniyr-TIiRzGNNG1mOhcoSnk/edit?usp=sharing '''Patterns'''] - [https://prezi.com/wacg-8dk6kme/astroimage '''Soutenance'''] <br />
|-<br />
<br />
!scope="row"| 14<br />
| [[Tachymètre]]<br />
| MACE, NOUGUIER, RAMEL<br />
| Olivier Gattaz<br />
| [[Fiche - Tachymètre | '''Fiche''']] - [[SRS - Tachymètre| '''SRS''']] - [[UML - Tachymètre| '''UML''']]<br />
| [https://github.com/Quego/Tachymetre '''github - Tachymètre''']<br />
| [[Media:Projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Rapport]] - [[Media:Pr%C3%A9sentation_projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Transparents]] - [[Media:D%C3%A9pliant_Tachym%C3%A8tre_-_MAC%C3%89_NOUGUIER_RAMEL.pdf|Flyer]] - [[Media:srs_tachymetre.pdf|Rapport consultant]] - [[Media:14_PatternDesign.pdf | Pattern Design]] - [[Media:Tachymetre_Presentation.pdf | Présentation de milieu de projet]]<br />
|-<br />
<br />
!scope="row"| 15<br />
| [[SmartProjector]]<br />
| BRANGER, HABLOT<br />
| Donsez, Maisonnasse<br />
| [[Fiche_SmartProjector_ricm4_2015_2016| '''Fiche''']] - [[SRS - SmartProjector| '''SRS''']] - [[UML - SmartProjector| '''UML''']]<br />
| [https://github.com/P0ppoff/SmartProjector '''github''']<br />
| [[Media:Expose final.pdf|Rapport]] - [[Media:PresentationPorjet.pdf|Transparents Présentation]] - [[Media:Flyer_SmartProjector.pdf|Flyer]] - [[Media:Gl_groupe16.pdf|Rapport Consultant]] - [http://air.imag.fr/index.php/Patron_de_conception_-_SmartProjector Patterns] - [[Media:Soutenance_SP.pdf|Soutenance finale]] - [[Media:archive.zip|Code Source]]<br />
|-<br />
<br />
|}<br />
<br />
===Liste de projets===<br />
<br />
* [[Dashboard pour gestionnaire de tâches et de ressources]], Olivier Richard<br />
* [[Moteur distribué d'exécution de commande]], Olivier Richard<br />
* [[Environnement d'expérimentation de pour NVIDIA Shield (Tegra X1)]], Olivier Richard <br />
* [[Speeding Simplified Script Language]], Olivier Richard<br />
<br />
* Aide (Open-Source)au Handicap Auditif, avec Didier Donsez, Jérome Maisonnasse, Marie-Paule Balicco (SAH UGA) et Nicolas Vuillerme<br />
** [[Borne interactive]] (1 sujet)<br />
** [[Sonotone]] (1 sujet)<br />
** [[Sous-titre en temps réel d'un cours]] (1 sujet)<br />
* [[GrenobloisFuté]] Couche trafic sur OsmAnd avec un greffon. Données dynamique de la métro. Dvp Android. Nicolas Palix.<br />
* [[GeoDiff]] Production, visualisation, fusion de variations (diff) sur de l'information géocodée : Nicolas Palix<br />
* [[Smart campus augmenté et contributif]] Didier Donsez, Vivien Quema<br />
<br />
* [[Streaming en stéréoscopie]] sur [[WebRTC]] avec rendu sur [[Oculus]] pour le robot [[RobAIR]], Jérôme Maisonnasse. ([http://gstconf.ubicast.tv/videos/stereoscopic-3d-video/ voir]).<br />
* [[STM32F7]] : Mise en oeuvre de la chaîne de compilation sous Linux avec [[OpenSTM32]] et [[OpenOCD]]. Nicolas Palix<br />
* [[PersyCup2016]] : Persyval Robocup, Didier Donsez, Vivien Quema, Jérome Maisonnasse. (3 étudiants)<br />
* [[Services étendus pour le modèle de composants iPOPO pour Python]], Didier Donsez & Thomas Calmant. (2 étudiants)<br />
* [[SmartClassRoom2016|Développement d'une interface partagée pour tables tactiles (projet SmartClassRoom)]], Didier Donsez, Jérôme Maisonnasse. (2 étudiants)<br />
* [[iRock2016|iRock : surveillance de glissement de terrains]], Didier Donsez & Vivien Quema<br />
* [[IndoorGeoloc2016|Géolocalisation in-door au moyen de balises (beacon) BLE et Wifi à base de STM32 et de balises iBeacon & AltBeacon]], Didier Donsez & Vivien Quema<br />
* [[UPnPOpenHAB2016|Intégration et gestion de caméras de surveillance UPnP dans la plateforme domotique open-source OpenHAB et myOpenHAB]], Didier Donsez & Jérome Maisonnasse.<br />
<br />
'''Projets non prioritaires'''<br />
<br />
* [[Liveprogramming with Kivy]], Olivier Richard<br />
* [[AstroImage]] production d'image d'astronomie, Olivier Richard et Bruno Bzeznik<br />
* [[G-code Cruncher]] Controle de machine CNC (Nucleo grbl + esp8266 + Sdcard), Olivier Richard<br />
* [[Intégration OpenHAB / OpenTele]] Nicolas Palix<br />
<br />
==RICM5==<br />
<br />
===Projet Semestre S10===<br />
<br />
Enseignant responsable : Didier Donsez<br />
<br />
Démarrage : Lundi 25/01 à 10H30-12H30, P253 (Rendez-vous devant la salle AIR) - Visioconf pour Thibaut Cordier<br />
<br />
Soutenance : Jeudi 17/03 à 13H00-17H00, salle P043 (Polytech Grenoble)puis en salle C005 (Batiment C) <br />
<br />
Etudiants : RICM5 + 8 étudiants Avosti DUT RT<br />
<br />
Rappel séances MPI<br />
* Séance 1 : mardi 26 janvier après midi - Stéphanie Diligent<br />
* Séance 2 : mardi 2 février après midi - Stéphanie Diligent<br />
* Séance 3 : lundi 8 février matin - Emmanuelle Tréhoust<br />
* Séance 4 : jeudi 11 février matin - Emmanuelle Tréhoust<br />
* Séance 5 : lundi 21 mars matin - Stéphanie Diligent et Emmanuelle Tréhoust<br />
<br />
=====Soutenances=====<br />
Planning:<br />
* Bossa (13H00-13H40 en salle P043)<br />
* Immersion EDF (13H45-14H25 en salle P043)<br />
* IaaS Docker (14H30-15H10 en salle P043)<br />
* SmartCampus (15H15-15H55 en salle P043 et salle P259 AIR)<br />
* SmartClassRoom (16H15-16H55 en C005)<br />
* Pot d' "Au Revoir" (17H00-1800 en C005)<br />
<br />
Instructions:<br />
*Chaque soutenance comporte 15 minutes de présentation, 15 minutes de démonstration et 10 minutes de questions. Un transparent doit être consacré au travail confié et réalisé par les étudiants en DUT (AVOSTI).<br />
* Répétez plusieurs fois votre présentation et votre démonstration.<br />
* L'ensemble des documents (y compris photos, vidéos et ''[[Logiciels#Screencast|screencast]]s'') doivent être accessibles depuis le tableau ci-dessous et dans chaque fiche de suivi. Prévoyez une copie sur clé USB.<br />
* Les étudiants vous accompagnent lors de votre soutenance.<br />
* '''TOUT Le matériel prêté devra être rapporté et restitué dans un sac cabas lors de la soutenance.'''<br />
<br />
=====Projets=====<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM5 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
!scope="col"| Documents<br />
|-<br />
<br />
!scope="row"| 1<br />
| [http://air.imag.fr/index.php/IaaS_collaboratif_avec_Docker IaaS - Docker]<br />
| Eudes Robin, Damotte Alan, Barthelemy Romain, Mammar Malek, Guo Kai<br />
| Didier Donsez<br />
| [[Projets-2015-2016-IaaS_Docker| '''Fiche''']] - [[Projets-2015-2016-IaaS_Docker-SRS| '''SRS''']]<br />
| [https://github.com/EudesRobin/iaas-collaboratif '''github''']<br />
| [[Media:RapportMPI_Iaas.pdf|Rapport MPI]] - [[Media:Transparents_IaaS.pdf|Transparents]] - [[Media:Flyer_IaaS.pdf|Flyer]] - [https://youtu.be/qtqgZNrgcRc '''Screencast''']<br />
|-<br />
!scope="row"| 2<br />
| [http://air.imag.fr/index.php/Portage_de_Bossa Portage de Bossa sur le Kernel Linux 4x]<br />
| Eric Michel Fotsing, Ombeline Rossi, Longfei Yao<br />
| Nicolas Palix, Didier Donsez<br />
| [[Projets-2015-2016-Portage_Bossa| '''Fiche''']] - [[Projets-2015-2016-Portage_Bossa-SRS| '''SRS''']]<br />
| Private repository<br />
| [[Media:Rapport_Bossa.pdf|Rapport]] - [[Media:Transparents_Bossa.pdf|Transparents]] - [[Media:Flyer_Bossa.pdf|Flyer]] - Photos - Vidéos <br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Visite immersive en réalité virtuelle dans une usine avec EDF]]<br />
| Adam Christophe, Aissanou Sarah, Klipffel Tararaina, Qian Jean, Zominy Laurent<br />
| Didier Donsez, Georges-Pierre Bonneau, Thibaut Cordier (EDF)<br />
| [[Projets-2015-2016-VisiteImmersiveEDF| '''Fiche''']]<br />
| [https://github.com/VisiteImmersiveEDF '''github''']<br />
| [[Media:RapportProjetX.pdf|Rapport]] - [[Media:TransparentsProojetX.pdf|Transparents]] - [[Media:FlyerProjetX.pdf|Flyer]] - Photos - Vidéos<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Contribution à OpenSmartCampus]] (voir http://data.beta.metropolegrenoble.fr/)<br />
| Quentin Torck, Vivien Michel, Jérémy Hammerer, Rama Codazzi, Zhengmeng Zhang<br />
| Didier Donsez, Vivien Quéma<br />
| [[Projets-2015-2016-OpenSmartCampus| '''Fiche''']]<br />
| [https://github.com/quentin74/SmartCampus.git '''github''']<br />
| [[Media:RapportProjetOpenSmartCampus2016.pdf|Rapport]] - [[Media:TransparentsProojetOpenSmartCampus2016.pdf|Transparents]] - [[Media:FlyerProjetOpenSmartCampus2016.pdf|Flyer]] - Photos - Vidéos<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Contribution à SmartClassRoom]] (Interfaces tactiles distribuées et partagées)<br />
| Saussac Thibault, Toussaint Sébastien, Hamdani Youcef, Zoppello Sebastien, Melik sak, Mesnier Vincent<br />
| Jérôme Maisonnasse, Didier Donsez<br />
| [[Projets-2015-2016-SmartClassRoom| '''Fiche''']] - [[Projets-2015-2016-SmartClassRoom/SRS| '''SRS''']]<br />
| [https://github.com/vince0508/SmartClassroom-TiledDisplayPart-master_Main '''github''']<br />
| [[Media:RapportProjetSmartClassRoom.pdf|Rapport]] - [[Media:TransparentsProjetSmartClassRoom.pdf|Transparents]] - [[Media:FlyerProjetSmartClassRoom.pdf|Flyer]] - [https://youtu.be/FEwoA4S9rsM '''Screencast/Vidéo''']<br />
|-<br />
<br />
<br />
|}<br />
<br />
===Projets annulés et reportés===<br />
* Projet avec [[Tango Project]] (Annulé)<br />
* Hack the Beam, Didier Donsez & Jérôme Maisonnasse.<br />
* [[Algorithmes de suivi de personnes pour robot de téléprésence RobAIR]] (Jérôme Maisonnasse, Didier Donsez)<br />
<br />
=M2PGI=<br />
==[[Projets M2PGI Services Machine-to-Machine|Projet Services Machine-to-Machine]]==<br />
* [[PM2M/2016/TP|Sujet et groupes]]</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=Projets_2015-2016&diff=29238Projets 2015-20162016-04-07T07:13:57Z<p>Tran-Quang-Tan.Bui: /* RICM4 */</p>
<hr />
<div><<[[Projets 2014-2015]] | [[Projets]] | [[Projets 2016-2017]]>><br />
=RICM=<br />
==RICM3==<br />
<br />
==RICM4==<br />
===Projet Semestre S8===<br />
<br />
Enseignants responsables : Olivier Richard, Didier Donsez<br />
<br />
<br />
* '''Evaluation à mi-parcours le lundi 7 mars''': Format: 10min (5min de présentation 3 slides au plus, 5min de discussion). Cette évaluation sera prise en compte dans la note finale.<br />
<br />
'''Consignes générales:'''<br />
<br />
* '''Vous devez être pro-actifs !!!''': Si des points sont pas ou mals spécifiés, vous le faîtes et vous justifiez vos choix. Pour les problèmes techniques éventuels vous pouvez: vous creusez la question, vous contactez l'auteur du code si il y a lieux, vous faites un rapport de bug ('''Attention:''' ca se prépare !), vous soumettez un patch, vous contactez l'enseignant ou la personne suivant le projet.<br />
<br />
* '''Vous devez maintenir une fiche de suivi de projet''': elle doit être mise à jour chaque semaine, elle rassemble les élements essentiels du projet, elle <br />
indique les évolutions du projet et présente sa feuille de route. '''Note:''' le nom de la fiche doit être composé du nom du projet et suffixé par ricm4_2015_2016.<br />
<br />
* '''Vous devez utiliser un logiciel de gestion de version''' pour vos développements comme [http://en.wikipedia.org/wiki/Git_%28software%29 git ] et nous vous conseillons d'utiliser le site [https://github.com github] pour l'hébergement de votre dépôt public.<br />
<br />
* Les document public (exemple sur github) doivent être rédigés en anglais (README, documentation, commentaires de code, nom de variables et de fonctions). Une bonnification sera accordée si le rapport et les transparents sont en anglais (la soutenance sera en francais).<br />
<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM4 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
|-<br />
<br />
!scope="row"| 1<br />
| [[Dashboard pour gestionnaire de tâches et de ressources]]<br />
| CROUZET, MATHIEU<br />
| Richard<br />
| [[Projets-2015-2016-DashBoard| '''Fiche''']] - [[DashBoard-UML| '''UML''']] - [[DashBoard-SRS| '''SRS''']]<br />
| [https://github.com/MatthieuCrouzet/Projet4A '''github''']<br />
| [[Media:RapportProjetDashBoard.pdf|Rapport]] - [[Media:TransparentsDashboard.pdf|Transparents]] - [[Media:FlyerProjet1.pdf|Flyer]] - [[Media:gl_groupe1.pdf|Rapport Consultant]] - [[Media:Paterns.pdf|Patterns]] - [[Media:PresentationDashboard.pdf|Presentation]]<br />
|-<br />
<br />
!scope="row"| 2<br />
| [[Speeding Simplified Script Language]]<br />
| POPEK, BERTRAND-DALECHAMPS, WEI<br />
| Richard<br />
| [[Projets-2015-2016-SSSL| '''Fiche''']] - [[SSSL-UML| '''UML''']] - [[Projets-2015-2016-SSSL-SRS | '''SRS''']] <br />
| [https://github.com/FlorianPO/Speeding-Simplified-Script-Language.git '''github''']<br />
| [[Media:RapportProjet2.pdf|Rapport]] - [[Media:Groupe2_AIR.pdf|Rapport Consultant]] - [[Media:PresentationIntermediaireProjet2.pdf|Presentation_Intermediaire]] - [[Media:PresentationFinalProjet2.pdf|Presentation_finale]] - [[Media:FlyerProjet2.pdf|Flyer]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Borne interactive]] <br />
| DUNAND - NAVARRO - REVEL<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Borne-Interactive| '''Fiche''']] - [[Projets-2015-2016-Borne-Interactive-SRS | '''SRS''']] - [[Projets-2015-2016-Borne-Interactive/UML_Diagrams | '''UML''']]<br />
| [https://github.com/Kant73/InteractiveDisplay '''github''']<br />
| [[Media:RapportProjet3.pdf|Rapport]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:IPopo.pdf|Rapport Consultant]] - [[Media:PatternDesign.pdf | '''Design Pattern''']] - [[Media:PresentationInteractiveDisplay.pdf|Présentation Intermédiaire]] - [[Media:BorneInteractive2016pres.pdf|Présentation finale]]<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Sonotone]]<br />
| LECORPS, VOUTAT, Hattinguais <br />
| Maisonnasse, Richard<br />
| [[Projets-2015-2016-Sonotone| '''Fiche''']] - [[Projets-2015-2016-Sonotone-SRS | '''SRS''']] - [[Projets-2015-2016-Sonotone-UML | '''UML''']]<br />
| [https://github.com/Gorgorot38/Sonotone-RICM4 '''github''']<br />
| [[Media:RapportProjetf.pdf|Rapport]] - [[Media:SlidesSonotone.pdf|Transparents]] - [[Media:FlyerProjet4.pdf|Flyer]] - [[Media:SRS_Consultant_Sonotone_4.pdf|Rapport_Consultant]] - [[Media:pattern_sonotone.pdf|Pattern]] - [[Media:Soutenance.pdf|Soutenance_miparcours]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Sous-titre_en_temps_r%C3%A9el_d%27un_cours| Sous-titre d'un cours en temps réel]]<br />
| LECHEVALLIER, BUI, OUNISSI <br />
| Maisonnasse<br />
| [[LiveSubtitles| '''Fiche''']]<br />
| [https://github.com/Lechevallier/RealTimeSubtitles '''github''']<br />
| [[Media:Real-Time-Subtitles-Report.pdf|Rapport]] - [[Media:UMLLS.pdf|UML]] - [[Media:Real-Time-Subtitles.pdf|Transparents]] - [[Media:RealTimeSubtitles-Leaflet.pdf|Flyer]] - [[Media: SRS_Groupe_5.pdf| Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 6<br />
| [[GrenobloisFuté]]<br />
| MOURET, DELAPORTE, LUCIDARME<br />
| Nicolas Palix<br />
| [[GrenobleFuté| '''Fiche''']] - [[SRS - GrenobloisFuté | '''SRS''']] - [[UML Grenoblois Fute | '''UML''']]<br />
| [https://github.com/Lucidarme/Osmand.git '''github''']<br />
| [[Media:RapportGrenobloisfute.pdf|Rapport]] - [[Media:midPresentation.pdf|Mid Presentation]] - [[Media:Flyer GrenobloisFute(3).pdf|Flyer]] - [[Media:gl_G14.pdf|Rapport Consultant]] - [[Media:Présentation GrenobloisFuté.pdf|Transparents]]<br />
|-<br />
<br />
!scope="row"| 7<br />
| [[Streaming en stéréoscopie]]<br />
| ZHAO ZILONG, HAMMOUTI<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Streaming-Stereoscopie| '''Fiche''']] - [[SRS - Streaming en stéréoscopie | '''SRS''']] - [[Projets-2015-2016-streaming_stereo-UML | '''UML''']]<br />
| [https://github.com/zhao-zilong/streaming_stereo '''github''']<br />
| [[Media:Rapport_ZHAO_HAMMOUTI.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet6.pdf|Flyer]] - [[Media:bruel_medewou_ndiaye.pdf|Rapport_consultant]] - [[Media:streaming.pdf|mi-parcours]] - [[Media:Soutenance_ZHAO_HAMMOUTI.pdf|Soutenance]]<br />
|-<br />
<br />
!scope="row"| 8<br />
| [[PersyCup2016]]<br />
| BIN, ZEGAOUI, ELLAPIN <br />
| Donsez, Maisonnasse<br />
| [[PersyCup| '''Fiche''']]<br />
| [https://github.com/legominstorm/lego '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet7.pdf|Flyer]] - [[Media:SoutenanceMiParcours-Persycup2016.pdf|Soutenance Mi-parcours]]<br />
|-<br />
<br />
!scope="row"| 9<br />
| [[Services étendus pour le modèle de composants iPOPO pour Python]]<br />
| FOUNAS, HALLAL, GATTAZ <br />
| Calmant & Donsez<br />
| [[Proj-2015-2016-Extensions_IPOPO | '''Fiche''']] - [[Proj-2015-2016-Extensions_IPOPO/SRS | '''SRS''']] - [[Proj-2015-2016-Extensions_IPOPO/UML | '''UML''']] <br />
| [https://github.com/abdelazizFounas/ipopo/tree/tlsremote '''github IPOPO'''] <br /> [https://github.com/gattazr/IPOPO-Remote-Client '''github IPOPO Client''']<br />
| [[Media:9_RapportProjet9.pdf|Rapport]] - [[Media:9_TransparentsProojet.pdf|Transparents]] - [[Media:9_FlyerProjet8.pdf|Flyer]] - [[Media:3-SRS-Pres.pdf| Rapport Consultant]] - [[Media:9_PatternStrat.pdf|Pattern Design]] - [[Media:9_Mid-Presentation.pdf|Mid Presentation]] - [[Media:9_Gantt.pdf|Gantt]] - [[Media:9_sources.pdf|Sources]]<br />
|-<br />
<br />
!scope="row"| 10<br />
| [[IndoorGeoloc2016]]<br />
| ARRADA - CRASTES - FAURE - STOIAN <br />
| Donsez<br />
| [[Proj-2015-2016-IndoorGeoloc/Fiche| '''Fiche''']] - [[Proj-2015-2016-IndoorGeoloc/SRS|SRS]]<br />
| [https://github.com/QuentinFA/Geoloc_Indoor '''github''']<br />
| [[Media:Rapport_final_Geoloc.pdf|Rapport]] - [[Media:Présentation_Geoloc.pdf|Transparents]] - [[Media:Flyer_geoloc.pdf|Flyer]] - [[Media: SRSGroupe17.pdf| Rapport Consultant]] - [[Media:Mi_parcours.pdf|Mid presentation]] - [[Media:DESIGN_PATTERN_GEOLOC.pdf|Mid presentation]]<br />
|-<br />
<br />
!scope="row"| 11<br />
| [[UPnPOpenHAB2016]]<br />
| Medewou , Ndiaye Yacine , Bruel Anna <br />
| Didier Donsez<br />
| [[Proj-Openhab-2016| '''Fiche''']] - [[Proj-2015-2016-Int%C3%A9gration_de_cam%C3%A9ra_de_surveillance_UPnP_%C3%A0_Openhab/SRS| '''SRS''']] - [[Proj-Openhab/UML| '''UML''']]<br />
| [https://github.com/openHab-UPnP '''github''']<br />
| [[Media:RapportProjet111.pdf|Rapport]] - [[Media:FlyerProjetAnglais111.pdf|EnglishFlyer]] - [[Media:FlyerProjet10.pdf|FrenchFlyer]] - [[Media:soutenace111.pdf|Soutenance]] - [[Media:TransparentsProojet111.pdf|Rapport Analyste]] - [[Media:gl_ZHAO_HAMMOUTI.pdf|Rapport Consultant]] - [[Media:pattern_ZHAO_HAMMOUTI.pdf|Patterns]] - [[Media:fichier111.pdf|Mini soutenance]]<br />
|-<br />
<br />
!scope="row"| 12<br />
| [[Sign2Speech]]<br />
| NIOGRET, NOGUERON, TITH<br />
| Didier Donsez<br />
| [[sign2speech_ricm4_2015_2016| '''Fiche''']] - [[SRS - Sign2Speech | '''SRS''']] - [[UML | '''UML''']]<br />
| [https://github.com/SignToSpeech-Project '''github'''] [[Media:Sign2Speech_2015_2015.tar.gz|'''Sign2Speech Client''']] [[Media:Sign2Speech-server_2015_2015.tar.gz|'''Sign2Speech Server''']]<br />
| [[Media:RapportProjet12_Sign2Speech_2015_2016.pdf|Rapport]] - [[Media:TransparentsProjet12_Sign2Speech_2015_2016.pdf|Transparents]] - [[Media:FlyerProjet11_Sign2Speech_2015-2016.pdf|Flyer]] - [[Media:12-Sign2Speech-RapportConsultant.pdf|Rapport Consultant]] - [[Media:12-Sign2Speech-MidPres.pdf|Mid presentation]] - [[Sign2Speech_RICM4_2015-2016_User_Manual|User Manual]]<br />
|-<br />
<br />
!scope="row"| 13<br />
| [[AstroImage]] <br />
| RACHEX, BLANC, GERRY<br />
| Olivier Richard et Bruno Bzeznik<br />
| [[Proj-2015-2016-Astroimage/Fiche| '''Fiche''']] - [[AstroImage/SRS | '''SRS''']] - [[Media:AstroImage-UML.png | '''UML''']]<br />
| [https://github.com/nicolas-blanc/AstroImage '''github''']<br />
| [[Media:DossierAstroImage.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerAstroImage.pdf|Flyer]] - [[Media:13-AstroImage-RapportConsultant.pdf|Rapport Consultant]] - [https://docs.google.com/presentation/d/15F8DRktwmOuSNabdxMASniyr-TIiRzGNNG1mOhcoSnk/edit?usp=sharing '''Patterns'''] - [https://prezi.com/wacg-8dk6kme/astroimage '''Soutenance'''] <br />
|-<br />
<br />
!scope="row"| 14<br />
| [[Tachymètre]]<br />
| MACE, NOUGUIER, RAMEL<br />
| Olivier Gattaz<br />
| [[Fiche - Tachymètre | '''Fiche''']] - [[SRS - Tachymètre| '''SRS''']] - [[UML - Tachymètre| '''UML''']]<br />
| [https://github.com/Quego/Tachymetre '''github - Tachymètre''']<br />
| [[Media:Projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Rapport]] - [[Media:Pr%C3%A9sentation_projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Transparents]] - [[Media:D%C3%A9pliant_Tachym%C3%A8tre_-_MAC%C3%89_NOUGUIER_RAMEL.pdf|Flyer]] - [[Media:srs_tachymetre.pdf|Rapport consultant]] - [[Media:14_PatternDesign.pdf | Pattern Design]] - [[Media:Tachymetre_Presentation.pdf | Présentation de milieu de projet]]<br />
|-<br />
<br />
!scope="row"| 15<br />
| [[SmartProjector]]<br />
| BRANGER, HABLOT<br />
| Donsez, Maisonnasse<br />
| [[Fiche_SmartProjector_ricm4_2015_2016| '''Fiche''']] - [[SRS - SmartProjector| '''SRS''']] - [[UML - SmartProjector| '''UML''']]<br />
| [https://github.com/P0ppoff/SmartProjector '''github''']<br />
| [[Media:Expose final.pdf|Rapport]] - [[Media:PresentationPorjet.pdf|Transparents Présentation]] - [[Media:Flyer_SmartProjector.pdf|Flyer]] - [[Media:Gl_groupe16.pdf|Rapport Consultant]] - [http://air.imag.fr/index.php/Patron_de_conception_-_SmartProjector Patterns] - [[Media:Soutenance_SP.pdf|Soutenance finale]] - [[Media:archive.zip|Code Source]]<br />
|-<br />
<br />
|}<br />
<br />
===Liste de projets===<br />
<br />
* [[Dashboard pour gestionnaire de tâches et de ressources]], Olivier Richard<br />
* [[Moteur distribué d'exécution de commande]], Olivier Richard<br />
* [[Environnement d'expérimentation de pour NVIDIA Shield (Tegra X1)]], Olivier Richard <br />
* [[Speeding Simplified Script Language]], Olivier Richard<br />
<br />
* Open-source assistance for hearing impairment, with Didier Donsez, Jérome Maisonnasse, Marie-Paule Balicco (SAH UGA) and Nicolas Vuillerme<br />
** [[Borne interactive]] (1 topic)<br />
** [[Sonotone]] (1 topic)<br />
** [[Sous-titre en temps réel d'un cours]] (1 topic)<br />
* [[GrenobloisFuté]] Traffic layer on OsmAnd via a plugin. Dynamic data from the Grenoble metropolitan authority. Android development. Nicolas Palix.<br />
* [[GeoDiff]] Production, visualization and merging of variations (diffs) of geocoded information: Nicolas Palix<br />
* [[Smart campus augmenté et contributif]] Didier Donsez, Vivien Quema<br />
<br />
* [[Streaming en stéréoscopie]] over [[WebRTC]] with rendering on the [[Oculus]] for the [[RobAIR]] robot, Jérôme Maisonnasse. ([http://gstconf.ubicast.tv/videos/stereoscopic-3d-video/ see]).<br />
* [[STM32F7]] : Setting up the compilation toolchain on Linux with [[OpenSTM32]] and [[OpenOCD]]. Nicolas Palix<br />
* [[PersyCup2016]] : Persyval Robocup, Didier Donsez, Vivien Quema, Jérome Maisonnasse. (3 students)<br />
* [[Services étendus pour le modèle de composants iPOPO pour Python]], Didier Donsez & Thomas Calmant. (2 students)<br />
* [[SmartClassRoom2016|Development of a shared interface for touch tables (SmartClassRoom project)]], Didier Donsez, Jérôme Maisonnasse. (2 students)<br />
* [[iRock2016|iRock: landslide monitoring]], Didier Donsez & Vivien Quema<br />
* [[IndoorGeoloc2016|Indoor geolocation using BLE and WiFi beacons based on STM32, iBeacon & AltBeacon]], Didier Donsez & Vivien Quema<br />
* [[UPnPOpenHAB2016|Integration and management of UPnP surveillance cameras in the open-source home-automation platforms OpenHAB and myOpenHAB]], Didier Donsez & Jérome Maisonnasse.<br />
<br />
'''Non-priority projects'''<br />
<br />
* [[Liveprogramming with Kivy]], Olivier Richard<br />
* [[AstroImage]] astronomy image production, Olivier Richard and Bruno Bzeznik<br />
* [[G-code Cruncher]] CNC machine control (Nucleo grbl + esp8266 + SD card), Olivier Richard<br />
* [[Intégration OpenHAB / OpenTele]] Nicolas Palix<br />
<br />
==RICM5==<br />
<br />
===Semester S10 Project===<br />
<br />
Teacher in charge: Didier Donsez<br />
<br />
Start: Monday 25/01, 10:30-12:30, P253 (meet in front of the AIR room) - Videoconference for Thibaut Cordier<br />
<br />
Defense: Thursday 17/03, 13:00-17:00, room P043 (Polytech Grenoble), then room C005 (building C) <br />
<br />
Students: RICM5 + 8 Avosti DUT RT students<br />
<br />
Reminder of the MPI sessions:<br />
* Session 1: Tuesday, January 26, afternoon - Stéphanie Diligent<br />
* Session 2: Tuesday, February 2, afternoon - Stéphanie Diligent<br />
* Session 3: Monday, February 8, morning - Emmanuelle Tréhoust<br />
* Session 4: Thursday, February 11, morning - Emmanuelle Tréhoust<br />
* Session 5: Monday, March 21, morning - Stéphanie Diligent and Emmanuelle Tréhoust<br />
<br />
=====Defenses=====<br />
Schedule:<br />
* Bossa (13:00-13:40, room P043)<br />
* Immersion EDF (13:45-14:25, room P043)<br />
* IaaS Docker (14:30-15:10, room P043)<br />
* SmartCampus (15:15-15:55, room P043 and room P259 AIR)<br />
* SmartClassRoom (16:15-16:55, C005)<br />
* Farewell drinks (17:00-18:00, C005)<br />
<br />
Instructions:<br />
* Each defense comprises 15 minutes of presentation, 15 minutes of demonstration and 10 minutes of questions. One slide must be devoted to the work assigned to and carried out by the DUT (AVOSTI) students.<br />
* Rehearse your presentation and your demonstration several times.<br />
* All documents (including photos, videos and ''[[Logiciels#Screencast|screencast]]s'') must be accessible from the table below and from each progress sheet. Bring a copy on a USB key.<br />
* The students accompany you during your defense.<br />
* '''ALL borrowed equipment must be brought back and returned in a tote bag at the defense.'''<br />
<br />
=====Projects=====<br />
{|class="wikitable alternance"<br />
|+ RICM5 project assignments 2015-2016<br />
|-<br />
|<br />
!scope="col"| Topic<br />
!scope="col"| Students<br />
!scope="col"| Teacher(s)<br />
!scope="col"| Progress sheet<br />
!scope="col"| Git repository<br />
!scope="col"| Documents<br />
|-<br />
<br />
!scope="row"| 1<br />
| [http://air.imag.fr/index.php/IaaS_collaboratif_avec_Docker IaaS - Docker]<br />
| Eudes Robin, Damotte Alan, Barthelemy Romain, Mammar Malek, Guo Kai<br />
| Didier Donsez<br />
| [[Projets-2015-2016-IaaS_Docker| '''Progress sheet''']] - [[Projets-2015-2016-IaaS_Docker-SRS| '''SRS''']]<br />
| [https://github.com/EudesRobin/iaas-collaboratif '''github''']<br />
| [[Media:RapportMPI_Iaas.pdf|MPI report]] - [[Media:Transparents_IaaS.pdf|Slides]] - [[Media:Flyer_IaaS.pdf|Flyer]] - [https://youtu.be/qtqgZNrgcRc '''Screencast''']<br />
|-<br />
!scope="row"| 2<br />
| [http://air.imag.fr/index.php/Portage_de_Bossa Porting Bossa to the Linux 4.x kernel]<br />
| Eric Michel Fotsing, Ombeline Rossi, Longfei Yao<br />
| Nicolas Palix, Didier Donsez<br />
| [[Projets-2015-2016-Portage_Bossa| '''Progress sheet''']] - [[Projets-2015-2016-Portage_Bossa-SRS| '''SRS''']]<br />
| Private repository<br />
| [[Media:Rapport_Bossa.pdf|Report]] - [[Media:Transparents_Bossa.pdf|Slides]] - [[Media:Flyer_Bossa.pdf|Flyer]] - Photos - Videos <br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Visite immersive en réalité virtuelle dans une usine avec EDF]]<br />
| Adam Christophe, Aissanou Sarah, Klipffel Tararaina, Qian Jean, Zominy Laurent<br />
| Didier Donsez, Georges-Pierre Bonneau, Thibaut Cordier (EDF)<br />
| [[Projets-2015-2016-VisiteImmersiveEDF| '''Progress sheet''']]<br />
| [https://github.com/VisiteImmersiveEDF '''github''']<br />
| [[Media:RapportProjetX.pdf|Report]] - [[Media:TransparentsProojetX.pdf|Slides]] - [[Media:FlyerProjetX.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Contribution à OpenSmartCampus]] (see http://data.beta.metropolegrenoble.fr/)<br />
| Quentin Torck, Vivien Michel, Jérémy Hammerer, Rama Codazzi, Zhengmeng Zhang<br />
| Didier Donsez, Vivien Quéma<br />
| [[Projets-2015-2016-OpenSmartCampus| '''Progress sheet''']]<br />
| [https://github.com/quentin74/SmartCampus.git '''github''']<br />
| [[Media:RapportProjetOpenSmartCampus2016.pdf|Report]] - [[Media:TransparentsProojetOpenSmartCampus2016.pdf|Slides]] - [[Media:FlyerProjetOpenSmartCampus2016.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Contribution à SmartClassRoom]] (Distributed and shared touch interfaces)<br />
| Saussac Thibault, Toussaint Sébastien, Hamdani Youcef, Zoppello Sebastien, Melik sak, Mesnier Vincent<br />
| Jérôme Maisonnasse, Didier Donsez<br />
| [[Projets-2015-2016-SmartClassRoom| '''Progress sheet''']] - [[Projets-2015-2016-SmartClassRoom/SRS| '''SRS''']]<br />
| [https://github.com/vince0508/SmartClassroom-TiledDisplayPart-master_Main '''github''']<br />
| [[Media:RapportProjetSmartClassRoom.pdf|Report]] - [[Media:TransparentsProjetSmartClassRoom.pdf|Slides]] - [[Media:FlyerProjetSmartClassRoom.pdf|Flyer]] - [https://youtu.be/FEwoA4S9rsM '''Screencast/Video''']<br />
|-<br />
<br />
<br />
|}<br />
<br />
===Cancelled and postponed projects===<br />
* Project with [[Tango Project]] (Cancelled)<br />
* Hack the Beam, Didier Donsez & Jérôme Maisonnasse.<br />
* [[Algorithmes de suivi de personnes pour robot de téléprésence RobAIR]] (Jérôme Maisonnasse, Didier Donsez)<br />
<br />
=M2PGI=<br />
==[[Projets M2PGI Services Machine-to-Machine|Machine-to-Machine Services Project]]==<br />
* [[PM2M/2016/TP|Topic and groups]]</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=File:RealTimeSubtitles-Leaflet.pdf&diff=29237File:RealTimeSubtitles-Leaflet.pdf2016-04-07T07:13:10Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div></div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29057LiveSubtitles2016-04-06T09:43:01Z<p>Tran-Quang-Tan.Bui: /* Project presentation */</p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half-time project achievement]] <br />
[[File:Subtitles1.jpg|800px|thumb|right|Final project achievement]] <br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher's speech into subtitles and allow students to correct misinterpreted words<br />
<br />
<br />
==Technologies used==<br />
<br />
* Meteor framework<br />
* jQuery<br />
* JavaScript<br />
* HTML5<br />
* CSS3<br />
* Bootstrap<br />
* Socket.io<br />
* MongoDB<br />
<br />
= Team =<br />
<br />
* Supervisor: Jérôme Maisonnasse<br />
<br />
* Members: BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Department: [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]] [[File:Team.png|thumb|500px|right|The project team]] <br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google Speech API===<br />
Keywords: new paragraph, comma, dot<br />
<br />
Long speech (over 2 minutes) is not supported; the recognition has to be restarted after that<br />
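The keyword handling and the restart workaround can be sketched with the (prefixed) Web Speech API; the function names and the exact punctuation mapping below are illustrative assumptions, not the project's actual code:<br />

```javascript
// Map the spoken keywords ("new paragraph", "comma", "dot") to punctuation.
// Pure function, so it can be tested outside the browser.
function applyKeywords(transcript) {
  return transcript
    .replace(/\bnew paragraph\b/gi, '\n\n')
    .replace(/\bcomma\b/gi, ',')
    .replace(/\bdot\b/gi, '.');
}

// Browser-only part: keep the recognizer alive past Chrome's ~2-minute cutoff
// by restarting it whenever it ends.
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;     // do not stop after the first utterance
  recognition.interimResults = true; // stream partial transcripts
  recognition.onresult = (event) => {
    const result = event.results[event.results.length - 1];
    console.log(applyKeywords(result[0].transcript));
  };
  recognition.onend = () => recognition.start(); // auto-restart workaround
  recognition.start();
}
```

Restarting in <code>onend</code> is a common way around the recognition timeout; a real implementation would also guard against restart loops on microphone-permission errors.<br />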
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2016.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*Testing the Google Speech API<br />
*Setting up the git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiple APIs: <strike>Google recognition and Web Speech API</strike> It is the same API, developed by Google<br />
<br />
== Week 3 (January 25th - January 31st) ==<br />
*The microphone works only when a local server is installed; we try with Apache (LAMP/XAMPP)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website for editing YouTube subtitles; it might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and keywords (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 8th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate the application's concerns. Our project is a Web-oriented application<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems: the API does not work with ambient noise. When we talk directly into the microphone, the API works fine.<br />
<br />
Tests:<br />
*Fast talking: dead after 1 minute<br />
*Slow talking (with interruptions) with music around: dead after 2 minutes<br />
*Slow talking: dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a JavaScript IDE, but too complicated for us to use<br />
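Linking Reveal.js to Socket.IO can be sketched as follows. This is a hedged sketch: it assumes a page that loads both the Reveal.js and Socket.IO client libraries, and the <code>slide</code> event name is made up for illustration:<br />

```javascript
// Pure helper: build the payload sent to the server when the slide changes.
function slideEvent(state) {
  return { type: 'slidechanged', h: state.indexh, v: state.indexv };
}

// Browser-only part: emit on local slide changes, follow remote ones.
if (typeof Reveal !== 'undefined' && typeof io !== 'undefined') {
  const socket = io(); // connect to the page's Socket.IO server
  Reveal.addEventListener('slidechanged', () => {
    socket.emit('slide', slideEvent(Reveal.getState()));
  });
  socket.on('slide', (msg) => Reveal.slide(msg.h, msg.v));
}
```

A minimal Socket.IO server would simply re-broadcast the <code>slide</code> event to the other connected clients (the students' browsers).<br />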
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding collaboration part (javascript database?)<br />
<br />
Working the presentation<br />
<br />
== Week 8 (March 7th - March 13th) ==<br />
<br />
Trying to implement sessions in PHP<br />
<br />
Searching for an easy way to store our data (which structure and which technology)<br />
<br />
Beginning to implement our project according to the Model-View-Controller pattern<br />
<br />
== Week 9 (March 14th - March 20th) ==<br />
<br />
Decision to switch to a Meteor project<br />
<br />
Learning the Meteor framework with PDF and YouTube tutorials<br />
<br />
== Week 10 (March 21st - March 27th) ==<br />
<br />
Beginning the implementation of our project with the Meteor framework<br />
<br />
For better security, decision to implement all functions that modify the database on the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
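The client/server split above can be sketched with Meteor methods. The method, collection, and field names here are illustrative assumptions, not the project's actual API:<br />

```javascript
// Pure helper: build a new course document (testable outside Meteor).
function makeCourse(name, teacherId) {
  return { name: name, teacherId: teacherId, listeners: 0, createdAt: new Date() };
}

// Meteor-only part: the database mutations live on the server, so clients
// can only modify courses by going through these methods.
if (typeof Meteor !== 'undefined' && Meteor.isServer) {
  const Courses = new Mongo.Collection('courses');
  Meteor.methods({
    addCourse(name) {
      return Courses.insert(makeCourse(name, this.userId));
    },
    removeCourse(courseId) {
      Courses.remove(courseId);
    },
  });
}
// On the client: Meteor.call('addCourse', 'Algorithms');
```

Defining all mutations as server-side <code>Meteor.methods</code> (rather than allowing direct client-side writes) is what gives the security benefit mentioned above.<br />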
<br />
== Week 11 (March 28th - April 3rd) ==<br />
<br />
Establishment of the final data structure, which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
Implementation of the Reveal package<br />
<br />
==== Features added on the client side: ====<br />
*UI for adding a word, or an option to a word, in the note area via mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data on a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment the course's listener count<br />
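A hedged sketch of the word-insertion logic described above, written as pure helpers; the document shapes (a word list, an <code>options</code> array) are assumptions for illustration:<br />

```javascript
// Insert a word at a given position in a note's word list (returns a new array).
function insertWordAt(words, word, position) {
  return [...words.slice(0, position), word, ...words.slice(position)];
}

// Attach an alternative ("option") to a specific word document.
function addOption(wordDoc, option) {
  const options = wordDoc.options || [];
  return { ...wordDoc, options: [...options, option] };
}
```

In a Meteor setting these helpers would be called from server-side methods that update the Words collection.<br />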
<br />
== Week 12 (April 4th - April 6th) ==<br />
<br />
*Establishment of a new project tree<br />
<br />
*Learning and developing a router to navigate between pages<br />
<br />
*Learning and use of Bootstrap 3<br />
<br />
*Adding the Google Speech API<br />
<br />
*Adding notes beside the Reveal slides in two modes: Edit and Read<br />
<br />
*Establishment of the collaborative-part algorithm<br />
<br />
*Establishment of use restrictions depending on whether the user is a teacher or a student<br />
<br />
*Final touches, Konami code, fun and joy<br />
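The teacher/student restriction can be sketched as a single permission helper; the role names and the action list are illustrative assumptions, not the project's actual rules:<br />

```javascript
// Decide whether a user may perform an action. Teachers control the slides
// and the courses; students (and teachers) may edit the subtitles.
function canPerform(user, action) {
  if (!user) return false; // anonymous visitors can only read
  const teacherOnly = ['launchSlides', 'addCourse', 'removeCourse'];
  if (teacherOnly.includes(action)) return user.role === 'teacher';
  return ['editWord', 'addWordOption'].includes(action);
}
```

Checking the role on the server side, inside each method, keeps a student from launching slides simply by crafting a client-side call.<br />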
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29056LiveSubtitles2016-04-06T09:35:41Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
[[File:Subtitles1.jpg|800px|thumb|right|final project achievement]] <br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]] [[File:Team.png|thumb|500px|right|final project achievement]] <br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
<br />
Not supporting long speech (over 2 minutes), have to reboot after that<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding collaboration part (javascript database?)<br />
<br />
Working the presentation<br />
<br />
== Week 8 (March 7st - March 13st) ==<br />
<br />
Try to implemente Session in php<br />
<br />
Searching for an easy way to store our data (which structure and which technologie)<br />
<br />
Begin to implement our projet according to the model view controller Model<br />
<br />
== Week 9 (March 14st - March 20st) ==<br />
<br />
Decision to switch to a Meteor projet<br />
<br />
Learning of the Meteor framework with tutoriel on pdf and youtube<br />
<br />
== Week 10 (March 21st - March 27st) ==<br />
<br />
Beginning of the implementation of our projet under the framework Meteor<br />
<br />
For more security, decision to implemente all functions that modify the database in the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
<br />
== Week 11 (April 28st - April 3st) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
Implementation of the Reveal package<br />
<br />
==== Features added on the client side: ====<br />
*UI of adding a word or an option to a word in the note part thanks to mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data on a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment number of course's listener<br />
<br />
== Week 12 (April 4st - April 6st) ==<br />
<br />
*Establishment of a new tree<br />
<br />
*Learning and development router to navigate between pages<br />
<br />
*Learning and use of Bootstrap 3<br />
<br />
*Adding API Google Speech<br />
<br />
*Adding note beside Reveal slides in two mode: Edit and Read<br />
<br />
*Establishment of the collaborative part algorithm<br />
<br />
*Establishment of use restriction depending on whether the user is teacher or student<br />
<br />
*Retail, konami code, fun and joy<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=File:Team.png&diff=29055File:Team.png2016-04-06T09:22:10Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div></div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29031LiveSubtitles2016-04-06T08:07:49Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
[[File:Subtitles1.jpg|800px|thumb|right|final project achievement]] <br />
[[File:Sub2.JPG|400px|thumb|right|final project achievement]] <br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
<br />
Not supporting long speech (over 2 minutes), have to reboot after that<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding collaboration part (javascript database?)<br />
<br />
Working the presentation<br />
<br />
== Week 8 (March 7st - March 13st) ==<br />
<br />
Try to implemente Session in php<br />
<br />
Searching for an easy way to store our data (which structure and which technologie)<br />
<br />
Begin to implement our projet according to the model view controller Model<br />
<br />
== Week 9 (March 14st - March 20st) ==<br />
<br />
Decision to switch to a Meteor projet<br />
<br />
Learning of the Meteor framework with tutoriel on pdf and youtube<br />
<br />
== Week 10 (March 21st - March 27st) ==<br />
<br />
Beginning of the implementation of our projet under the framework Meteor<br />
<br />
For more security, decision to implemente all functions that modify the database in the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
<br />
== Week 11 (April 28st - April 3st) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
Implementation of the Reveal package<br />
<br />
==== Features added on the client side: ====<br />
*UI of adding a word or an option to a word in the note part thanks to mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data on a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment number of course's listener<br />
<br />
== Week 12 (April 4st - April 6st) ==<br />
<br />
*Establishment of a new tree<br />
<br />
*Learning and development router to navigate between pages<br />
<br />
*Learning and use of Bootstrap 3<br />
<br />
*Adding API Google Speech<br />
<br />
*Adding note beside Reveal slides in two mode: Edit and Read<br />
<br />
*Establishment of the collaborative part algorithm<br />
<br />
*Establishment of use restriction depending on whether the user is teacher or student<br />
<br />
*Retail, konami code, fun and joy<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29030LiveSubtitles2016-04-06T08:07:35Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
[[File:Subtitles1.jpg|800px|thumb|right|final project achievement]] <br />
[[File:Sub2.JPG|800px|thumb|right|final project achievement]] <br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
<br />
Not supporting long speech (over 2 minutes), have to reboot after that<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding collaboration part (javascript database?)<br />
<br />
Working on the presentation<br />
<br />
== Week 8 (March 7th - March 13th) ==<br />
<br />
Trying to implement sessions in PHP<br />
<br />
Searching for an easy way to store our data (which structure and which technology)<br />
<br />
Beginning to implement our project according to the Model-View-Controller pattern<br />
<br />
== Week 9 (March 14th - March 20th) ==<br />
<br />
Decision to switch to a Meteor project<br />
<br />
Learning the Meteor framework with PDF and YouTube tutorials<br />
<br />
== Week 10 (March 21st - March 27th) ==<br />
<br />
Beginning the implementation of our project with the Meteor framework<br />
<br />
For better security, decision to implement all functions that modify the database on the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
<br />
== Week 11 (March 28th - April 3rd) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
Implementation of the Reveal package<br />
<br />
==== Features added on the client side: ====<br />
*UI for adding a word, or an option to a word, in the note part via mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data at a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment the course's listener count<br />
<br />
== Week 12 (April 4th - April 6th) ==<br />
<br />
*Establishment of a new project tree<br />
<br />
*Learning about and developing a router to navigate between pages<br />
<br />
*Learning and use of Bootstrap 3<br />
<br />
*Adding the Google Speech API<br />
<br />
*Adding notes beside Reveal slides in two modes: Edit and Read<br />
<br />
*Establishment of the collaborative part algorithm<br />
<br />
*Establishment of usage restrictions depending on whether the user is a teacher or a student<br />
<br />
*Final touches, Konami code, fun and joy<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=File:Sub2.JPG&diff=29029File:Sub2.JPG2016-04-06T08:07:03Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div></div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29028LiveSubtitles2016-04-06T08:05:33Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
[[File:Subtitles1.jpg|800px|thumb|right|final project achievement]] <br />
=Project presentation=<br />
Transcribe a teacher's speech into subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisor : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Department : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Keywords: new paragraph, comma, dot<br />
<br />
Does not support long speech (over 2 minutes); recognition has to be restarted after that<br />
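A workaround sketch for this cutoff (untested, our own illustration: the helper `collectFinalTranscript` and the callback name are not part of the API) is to restart the recognizer from its `onend` handler, following the Web Speech API draft linked below:

```javascript
// Concatenate the final (non-interim) results of a recognition event.
// Pure helper of our own, not part of the Web Speech API.
function collectFinalTranscript(results) {
  let text = '';
  for (let i = 0; i < results.length; i++) {
    if (results[i].isFinal) {
      text += results[i][0].transcript;
    }
  }
  return text;
}

// Browser-only wiring (Chrome exposes the prefixed constructor):
// restart recognition whenever it stops by itself.
function startDictation(onText) {
  const rec = new webkitSpeechRecognition();
  rec.continuous = true;
  rec.interimResults = true;
  rec.onresult = (event) => onText(collectFinalTranscript(event.results));
  rec.onend = () => rec.start(); // the API dies after ~2 min: start again
  rec.start();
  return rec;
}
```

This keeps dictation running across the cutoff at the cost of a short gap each time the recognizer restarts.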
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started on January 11th, 2016.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We learned more about our project and what is expected in the coming weeks''<br />
<br />
*Getting to grips with the project<br />
*Testing the Google Speech API<br />
*Creating the Git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiple APIs: <strike>Google recognition and Web Speech API</strike> It is the same API, developed by Google<br />
<br />
== Week 3 (January 25th - January 31st) ==<br />
*The microphone works only when the page is served by a local web server; we are trying Apache (LAMP/XAMPP)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website for editing YouTube subtitles; it might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 8th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (architectural) : This pattern separates the application's concerns. Our project is a Web-oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API does not work with ambient noise. When we talk directly into the microphone, the API works fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music around : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a JavaScript IDE, but too complicated for us to use<br />
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
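A sketch of this client-to-server path (the event name `subtitle` and the message shape are our own choices, not prescribed by socket.io):

```javascript
// Build the message sent for each recognized chunk of speech.
// The field names here are our own convention.
function makeSubtitleMessage(courseId, text) {
  return { courseId: courseId, text: text, sentAt: Date.now() };
}

// Browser side (assumes the socket.io client script exposes a global `io`):
function wireSubtitleSender(courseId) {
  const socket = io(); // connect back to the serving host
  return function send(text) {
    socket.emit('subtitle', makeSubtitleMessage(courseId, text));
  };
}

// Node/server side (takes a socket.io server instance):
function wireSubtitleReceiver(io) {
  io.on('connection', (socket) => {
    socket.on('subtitle', (msg) => {
      // rebroadcast to every other connected client following this course
      socket.broadcast.emit('subtitle:' + msg.courseId, msg.text);
    });
  });
}
```

The sender would be called from the speech-recognition result handler; clients subscribe to `subtitle:<courseId>` to display the live text.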
<br />
Working on adding collaboration part (javascript database?)<br />
<br />
Working on the presentation<br />
<br />
== Week 8 (March 7th - March 13th) ==<br />
<br />
Trying to implement sessions in PHP<br />
<br />
Searching for an easy way to store our data (which structure and which technology)<br />
<br />
Beginning to implement our project according to the Model-View-Controller pattern<br />
<br />
== Week 9 (March 14th - March 20th) ==<br />
<br />
Decision to switch to a Meteor project<br />
<br />
Learning the Meteor framework with PDF and YouTube tutorials<br />
<br />
== Week 10 (March 21st - March 27th) ==<br />
<br />
Beginning the implementation of our project with the Meteor framework<br />
<br />
For better security, decision to implement all functions that modify the database on the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
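The server-side-writes decision can be sketched with Meteor methods; the collection name, method names, and document shape below are our own illustration, not the actual project code:

```javascript
// Pure helper: build the document for a new course (shape assumed by us).
function buildCourseDoc(title, teacherId) {
  return { title: title, teacherId: teacherId, listeners: 0, createdAt: new Date() };
}

// Server-only registration: clients call these methods by name
// instead of writing to the database directly.
if (typeof Meteor !== 'undefined' && Meteor.isServer) {
  const Courses = new Mongo.Collection('courses');

  Meteor.methods({
    'courses.insert'(title) {
      // this.userId identifies the logged-in caller
      return Courses.insert(buildCourseDoc(title, this.userId));
    },
    'courses.remove'(courseId) {
      // only the teacher who owns the course may remove it
      Courses.remove({ _id: courseId, teacherId: this.userId });
    }
  });
}
```

A client would then call `Meteor.call('courses.insert', 'My course')` rather than touching the collection itself.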
<br />
== Week 11 (March 28th - April 3rd) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
Implementation of the Reveal package<br />
<br />
==== Features added on the client side: ====<br />
*UI for adding a word, or an option to a word, in the note part via mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data at a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment the course's listener count<br />
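The "insert a word at a specific position" and "add a word option" operations reduce to small list edits; a sketch under an assumed data shape (each word as `{ text, options }`, which is our own convention):

```javascript
// Insert `word` at `position` in the ordered word list of a note.
// Returns a new array; positions past the end append.
function insertWordAt(words, position, word) {
  const copy = words.slice();
  copy.splice(Math.min(position, copy.length), 0, word);
  return copy;
}

// Add an alternative spelling ("option") to the word at `position`,
// without mutating the original list.
function addWordOption(words, position, option) {
  return words.map((w, i) =>
    i === position ? { text: w.text, options: w.options.concat(option) } : w
  );
}
```

On the server these would be applied to the note stored in the Words collection before writing it back.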
<br />
== Week 12 (April 4th - April 6th) ==<br />
<br />
*Establishment of a new project tree<br />
<br />
*Learning about and developing a router to navigate between pages<br />
<br />
*Learning and use of Bootstrap 3<br />
<br />
*Adding the Google Speech API<br />
<br />
*Adding notes beside Reveal slides in two modes: Edit and Read<br />
<br />
*Establishment of the collaborative part algorithm<br />
<br />
*Establishment of usage restrictions depending on whether the user is a teacher or a student<br />
<br />
*Final touches, Konami code, fun and joy<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=File:Subtitles1.jpg&diff=29027File:Subtitles1.jpg2016-04-06T08:04:54Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div></div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=Projets_2015-2016&diff=29024Projets 2015-20162016-04-06T07:57:57Z<p>Tran-Quang-Tan.Bui: /* Projet Semestre S8 */</p>
<hr />
<div><<[[Projets 2014-2015]] | [[Projets]] | [[Projets 2016-2017]]>><br />
=RICM=<br />
==RICM3==<br />
<br />
==RICM4==<br />
===Projet Semestre S8===<br />
<br />
Teachers in charge: Olivier Richard, Didier Donsez<br />
<br />
<br />
* '''Mid-term evaluation on Monday, March 7th''': format: 10 min (at most 5 min of presentation with no more than 3 slides, 5 min of discussion). This evaluation will count towards the final grade.<br />
<br />
'''General guidelines:'''<br />
<br />
* '''You must be proactive!''': if some points are unspecified or poorly specified, make a decision and justify your choices. For any technical problem you can: dig into the question, contact the author of the code where relevant, file a bug report ('''warning:''' this takes preparation!), submit a patch, or contact the teacher or the person following the project.<br />
<br />
* '''You must maintain a project tracking sheet''': it must be updated every week; it gathers the essential elements of the project, records the project's evolution and presents its roadmap. '''Note:''' the sheet's name must consist of the project name suffixed with ricm4_2015_2016.<br />
<br />
* '''You must use version-control software''' for your development, such as [http://en.wikipedia.org/wiki/Git_%28software%29 git ], and we recommend the site [https://github.com github] for hosting your public repository.<br />
<br />
* Public documents (for example on github) must be written in English (README, documentation, code comments, variable and function names). A bonus will be granted if the report and the slides are in English (the defense will be in French).<br />
<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM4 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
|-<br />
<br />
!scope="row"| 1<br />
| [[Dashboard pour gestionnaire de tâches et de ressources]]<br />
| CROUZET, MATHIEU<br />
| Richard<br />
| [[Projets-2015-2016-DashBoard| '''Fiche''']]<br />
| [https://github.com/MatthieuCrouzet/Projet4A '''github''']<br />
| [[Media:RapportProjetDashBoard.pdf|Rapport]] - [[Media:TransparentsDashboard.pdf|Transparents]] - [[Media:FlyerProjet1.pdf|Flyer]] - [[Media:gl_groupe1.pdf|Rapport Consultant]] - [[Media:Paterns.pdf|Patterns]] - [[Media:PresentationDashboard.pdf|Presentation]]<br />
|-<br />
<br />
!scope="row"| 2<br />
| [[Speeding Simplified Script Language]]<br />
| POPEK, BERTRAND-DALECHAMPS, WEI<br />
| Richard<br />
| [[Projets-2015-2016-SSSL| '''Fiche''']] - [[SSSL-UML| '''UML''']] - [[Projets-2015-2016-SSSL-SRS | '''SRS''']] <br />
| [https://github.com/FlorianPO/Speeding-Simplified-Script-Language.git '''github''']<br />
| [[Media:RapportProjet2.pdf|Rapport]] - [[Media:Groupe2_AIR.pdf|Rapport Consultant]] - [[Media:PresentationIntermediaireProjet2.pdf|Presentation_Intermediaire]] - [[Media:PresentationFinalProjet2.pdf|Presentation_final]] - [[Media:FlyerSSSL_projet2.pdf|flyer]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Borne interactive]] <br />
| DUNAND - NAVARRO - REVEL<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Borne-Interactive| '''Fiche''']] - [[Projets-2015-2016-Borne-Interactive-SRS | '''SRS''']] - [[Projets-2015-2016-Borne-Interactive/UML_Diagrams | '''UML''']]<br />
| [https://github.com/Kant73/InteractiveDisplay '''github''']<br />
| [[Media:RapportProjet3.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:IPopo.pdf|Rapport Consultant]] - [[Media:PatternDesign.pdf | '''Design Pattern''']] - [[Media:PresentationInteractiveDisplay.pdf|Présentation Intermédiaire]] - [https://docs.google.com/presentation/d/1teLj4GOT0qPPpVCVnBr1nDf-JPTqv0ZntCt2RLoBSKQ/edit?usp=sharing '''Présentation finale''']<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Sonotone]]<br />
| LECORPS, VOUTAT, Hattinguais <br />
| Maisonnasse, Richard<br />
| [[Projets-2015-2016-Sonotone| '''Fiche''']] - [[Projets-2015-2016-Sonotone-SRS | '''SRS''']] - [[Projets-2015-2016-Sonotone-UML | '''UML''']]<br />
| [https://github.com/Gorgorot38/Sonotone-RICM4 '''github''']<br />
| [[Media:RapportProjetf.pdf|Rapport]] - [[Media:SlidesSonotone.pdf|Transparents]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:SRS_Consultant_Sonotone_4.pdf|Rapport_Consultant]] - [[Media:pattern_sonotone.pdf|Pattern]] - [[Media:Soutenance.pdf|Soutenance_miparcours]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Sous-titre_en_temps_r%C3%A9el_d%27un_cours| Sous-titre d'un cours en temps réel]]<br />
| LECHEVALLIER, BUI, OUNISSI <br />
| Maisonnasse<br />
| [[LiveSubtitles| '''Fiche''']]<br />
| [https://github.com/Lechevallier/RealTimeSubtitles '''github''']<br />
| [[Media:Real-Time-Subtitles.pdf|Rapport]] -[[Media:UMLLS.pdf|UML]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet4.pdf|Flyer]] - [[Media: SRS_Groupe_5.pdf| Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 6<br />
| [[GrenobloisFuté]]<br />
| MOURET, DELAPORTE, LUCIDARME<br />
| Nicolas Palix<br />
| [[GrenobleFuté| '''Fiche''']] - [[SRS - GrenobloisFuté | '''SRS''']] <br />
| [https://github.com/Lucidarme/Osmand.git '''github''']<br />
| [[Media:RapportProjet1.pdf|Rapport]] - [[Media:midPresentation.pdf|Mid Presentation]] - [[Media:Flyer GrenobloisFute(3).pdf|Flyer]] - [[Media:gl_G14.pdf|Rapport Consultant]] - [[Media:Présentation GrenobloisFuté.pdf|Transparents]]<br />
|-<br />
<br />
!scope="row"| 7<br />
| [[Streaming en stéréoscopie]]<br />
| ZHAO ZILONG, HAMMOUTI<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Streaming-Stereoscopie| '''Fiche''']] - [[SRS - Streaming en stéréoscopie | '''SRS''']] - [[Projets-2015-2016-streaming_stereo-UML | '''UML''']]<br />
| [https://github.com/zhao-zilong/streaming_stereo '''github''']<br />
| [[Media:Rapport_ZHAO_HAMMOUTI.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet6.pdf|Flyer]] - [[Media:bruel_medewou_ndiaye.pdf|Rapport_consultant]] - [[Media:streaming.pdf|mi-parcours]]<br />
|-<br />
<br />
!scope="row"| 8<br />
| [[PersyCup2016]]<br />
| BIN, ZEGAOUI, ELLAPIN <br />
| Donsez, Maisonnasse<br />
| [[PersyCup| '''Fiche''']]<br />
| [https://github.com/legominstorm/lego '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet7.pdf|Flyer]] - [[Media:SoutenanceMiParcours-Persycup2016.pdf|Soutenance Mi-parcours]]<br />
|-<br />
<br />
!scope="row"| 9<br />
| [[Services étendus pour le modèle de composants iPOPO pour Python]]<br />
| FOUNAS, HALLAL, GATTAZ <br />
| Calmant & Donsez<br />
| [[Proj-2015-2016-Extensions_IPOPO | '''Fiche''']] - [[Proj-2015-2016-Extensions_IPOPO/SRS | '''SRS''']] - [[Proj-2015-2016-Extensions_IPOPO/UML | '''UML''']] <br />
| [https://github.com/abdelazizFounas/ipopo/tree/tlsremote '''github IPOPO'''] <br /> [https://github.com/gattazr/IPOPO-Remote-Client '''github IPOPO Client''']<br />
| [[Media:9_RapportProjet9.pdf|Rapport]] - [[Media:9_TransparentsProojet.pdf|Transparents]] - [[Media:9_FlyerProjet8.pdf|Flyer]] - [[Media:3-SRS-Pres.pdf| Rapport Consultant]] - [[Media:9_PatternStrat.pdf|Pattern Design]] - [[Media:9_Mid-Presentation.pdf|Mid Presentation]] - [[Media:9_Gantt.pdf|Gantt]]<br />
|-<br />
<br />
!scope="row"| 10<br />
| [[IndoorGeoloc2016]]<br />
| ARRADA - CRASTES - FAURE - STOIAN <br />
| Donsez<br />
| [[Proj-2015-2016-IndoorGeoloc/Fiche| '''Fiche''']] - [[Proj-2015-2016-IndoorGeoloc/SRS|SRS]]<br />
| [https://github.com/QuentinFA/Geoloc_Indoor '''github''']<br />
| [[Media:Proj-2015-2016-IndoorGeoloc/RapportProjet.pdf|Rapport]] - [[Media:Proj-2015-2016-IndoorGeoloc/TransparentsProjet.pdf|Transparents]] - [[Media:Flyer_geoloc.pdf|Flyer]] - [[Media: SRSGroupe17.pdf| Rapport Consultant]] - [[Media:Mi_parcours.pdf|Mid presentation]] - [[Media:DESIGN_PATTERN_GEOLOC.pdf|Mid presentation]]<br />
|-<br />
<br />
!scope="row"| 11<br />
| [[UPnPOpenHAB2016]]<br />
| Medewou , Ndiaye Yacine , Bruel Anna <br />
| Didier Donsez<br />
| [[Proj-Openhab-2016| '''Fiche''']] - [[Proj-2015-2016-Int%C3%A9gration_de_cam%C3%A9ra_de_surveillance_UPnP_%C3%A0_Openhab/SRS| '''SRS''']] - [[Proj-Openhab/UML| '''UML''']]<br />
| [https://github.com/openHab-UPnP '''github''']<br />
| [[Media:RapportProjet111.pdf|Rapport]] - [[Media:FlyerProjetAnglais111.pdf|EnglishFlyer]] - [[Media:FlyerProjet10.pdf|FrenchFlyer]] - [[Media:soutenace111.pdf|Soutenance]] - [[Media:TransparentsProojet111.pdf|Rapport Analyste]] - [[Media:gl_ZHAO_HAMMOUTI.pdf|Rapport Consultant]] - [[Media:pattern_ZHAO_HAMMOUTI.pdf|Patterns]] - [[Media:fichier111.pdf|Mini soutenance]]<br />
|-<br />
<br />
!scope="row"| 12<br />
| [[Sign2Speech]]<br />
| NIOGRET, NOGUERON, TITH<br />
| Didier Donsez<br />
| [[sign2speech_ricm4_2015_2016| '''Fiche''']] - [[SRS - Sign2Speech | '''SRS''']] - [[UML | '''UML''']]<br />
| [https://github.com/SignToSpeech-Project '''github'''] [[Media:Sign2Speech_2015_2015.tar.gz|'''Sign2Speech Client''']] [[Media:Sign2Speech-server_2015_2015.tar.gz|'''Sign2Speech Server''']]<br />
| [[Media:RapportProjet12_Sign2Speech_2015_2016.pdf|Rapport]] - [[Media:TransparentsProjet12_Sign2Speech_2015_2016.pdf|Transparents]] - [[Media:FlyerProjet11_Sign2Speech_2015-2016.pdf|Flyer]] - [[Media:12-Sign2Speech-RapportConsultant.pdf|Rapport Consultant]] - [[Media:12-Sign2Speech-MidPres.pdf|Mid presentation]] - [[Sign2Speech_RICM4_2015-2016_User_Manual|User Manual]]<br />
|-<br />
<br />
!scope="row"| 13<br />
| [[AstroImage]] <br />
| RACHEX, BLANC, GERRY<br />
| Olivier Richard et Bruno Bzeznik<br />
| [[Proj-2015-2016-Astroimage/Fiche| '''Fiche''']] - [[AstroImage/SRS | '''SRS''']] - [[Media:AstroImage-UML.pdf | '''UML''']]<br />
| [https://github.com/nicolas-blanc/AstroImage '''github''']<br />
| [[Media:RapportProjet13.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet12.pdf|Flyer]] - [[Media:13-AstroImage-RapportConsultant.pdf|Rapport Consultant]] - [https://docs.google.com/presentation/d/15F8DRktwmOuSNabdxMASniyr-TIiRzGNNG1mOhcoSnk/edit?usp=sharing '''Patterns''']<br />
|-<br />
<br />
!scope="row"| 14<br />
| [[Tachymètre]]<br />
| MACE, NOUGUIER, RAMEL<br />
| Olivier Gattaz<br />
| [[Fiche - Tachymètre | '''Fiche''']] - [[SRS - Tachymètre| '''SRS''']] - [[UML - Tachymètre| '''UML''']]<br />
| [https://github.com/Quego/Tachymetre '''github - Tachymètre''']<br />
| [[Media:Projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Rapport]] - [[Media:Pr%C3%A9sentation_projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Transparents]] - [[Media:D%C3%A9pliant_Tachym%C3%A8tre_-_MAC%C3%89_NOUGUIER_RAMEL.pdf|Flyer]] - [[Media:srs_tachymetre.pdf|Rapport consultant]] - [[Media:14_PatternDesign.pdf | Pattern Design]] - [[Media:Tachymetre_Presentation.pdf | Présentation de milieu de projet]]<br />
|-<br />
<br />
!scope="row"| 15<br />
| [[SmartProjector]]<br />
| BRANGER, HABLOT<br />
| Donsez, Maisonnasse<br />
| [[Fiche_SmartProjector_ricm4_2015_2016| '''Fiche''']] - [[SRS - SmartProjector| '''SRS''']] - [[UML - SmartProjector| '''UML''']]<br />
| [https://github.com/P0ppoff/SmartProjector '''github''']<br />
| [[Media:Expose final.pdf|Rapport]] - [[Media:PresentationPorjet.pdf|Transparents Présentation]] - [[Media:Flyer_SmartProjector.pdf|Flyer]] - [[Media:Gl_groupe16.pdf|Rapport Consultant]] - [http://air.imag.fr/index.php/Patron_de_conception_-_SmartProjector Patterns] - [[Media:Soutenance_SP.pdf|Soutenance finale]]<br />
|-<br />
<br />
|}<br />
<br />
===Liste de projets===<br />
<br />
* [[Dashboard pour gestionnaire de tâches et de ressources]], Olivier Richard<br />
* [[Moteur distribué d'exécution de commande]], Olivier Richard<br />
* [[Environnement d'expérimentation de pour NVIDIA Shield (Tegra X1)]], Olivier Richard <br />
* [[Speeding Simplified Script Language]], Olivier Richard<br />
<br />
* Open-source assistance for hearing impairment, with Didier Donsez, Jérome Maisonnasse, Marie-Paule Balicco (SAH UGA) and Nicolas Vuillerme<br />
** [[Borne interactive]] (1 topic)<br />
** [[Sonotone]] (1 topic)<br />
** [[Sous-titre en temps réel d'un cours]] (1 topic)<br />
* [[GrenobloisFuté]] Traffic layer on OsmAnd via a plugin. Dynamic data from the Grenoble métropole. Android development. Nicolas Palix.<br />
* [[GeoDiff]] Production, visualization and merging of variations (diffs) on geocoded information: Nicolas Palix<br />
* [[Smart campus augmenté et contributif]] Didier Donsez, Vivien Quema<br />
<br />
* [[Streaming en stéréoscopie]] over [[WebRTC]] with rendering on an [[Oculus]] headset for the [[RobAIR]] robot, Jérôme Maisonnasse. ([http://gstconf.ubicast.tv/videos/stereoscopic-3d-video/ see]).<br />
* [[STM32F7]] : setting up the compilation toolchain on Linux with [[OpenSTM32]] and [[OpenOCD]]. Nicolas Palix<br />
* [[PersyCup2016]] : Persyval Robocup, Didier Donsez, Vivien Quema, Jérome Maisonnasse. (3 students)<br />
* [[Services étendus pour le modèle de composants iPOPO pour Python]], Didier Donsez & Thomas Calmant. (2 students)<br />
* [[SmartClassRoom2016|Development of a shared interface for touch tables (SmartClassRoom project)]], Didier Donsez, Jérôme Maisonnasse. (2 students)<br />
* [[iRock2016|iRock: landslide monitoring]], Didier Donsez & Vivien Quema<br />
* [[IndoorGeoloc2016|Indoor geolocation using BLE and WiFi beacons based on STM32 and iBeacon & AltBeacon beacons]], Didier Donsez & Vivien Quema<br />
* [[UPnPOpenHAB2016|Integration and management of UPnP surveillance cameras in the open-source home-automation platform OpenHAB and myOpenHAB]], Didier Donsez & Jérome Maisonnasse.<br />
<br />
'''Non-priority projects'''<br />
<br />
* [[Liveprogramming with Kivy]], Olivier Richard<br />
* [[AstroImage]] astronomy image production, Olivier Richard and Bruno Bzeznik<br />
* [[G-code Cruncher]] CNC machine control (Nucleo grbl + esp8266 + SD card), Olivier Richard<br />
* [[Intégration OpenHAB / OpenTele]] Nicolas Palix<br />
<br />
==RICM5==<br />
<br />
===Projet Semestre S10===<br />
<br />
Teacher in charge: Didier Donsez<br />
<br />
Kick-off: Monday 25/01, 10:30-12:30, room P253 (meet in front of the AIR room) - videoconference for Thibaut Cordier<br />
<br />
Defense: Thursday 17/03, 13:00-17:00, room P043 (Polytech Grenoble), then room C005 (building C)<br />
<br />
Students: RICM5 + 8 Avosti DUT RT students<br />
<br />
Reminder of the MPI sessions<br />
* Session 1: Tuesday, January 26th, afternoon - Stéphanie Diligent<br />
* Session 2: Tuesday, February 2nd, afternoon - Stéphanie Diligent<br />
* Session 3: Monday, February 8th, morning - Emmanuelle Tréhoust<br />
* Session 4: Thursday, February 11th, morning - Emmanuelle Tréhoust<br />
* Session 5: Monday, March 21st, morning - Stéphanie Diligent and Emmanuelle Tréhoust<br />
<br />
=====Soutenances=====<br />
Schedule:<br />
* Bossa (13:00-13:40, room P043)<br />
* Immersion EDF (13:45-14:25, room P043)<br />
* IaaS Docker (14:30-15:10, room P043)<br />
* SmartCampus (15:15-15:55, room P043 and AIR room P259)<br />
* SmartClassRoom (16:15-16:55, room C005)<br />
* "Goodbye" reception (17:00-18:00, room C005)<br />
<br />
Instructions:<br />
* Each defense consists of a 15-minute presentation, a 15-minute demonstration and 10 minutes of questions. One slide must be devoted to the work assigned to and carried out by the DUT (AVOSTI) students.<br />
* Rehearse your presentation and your demonstration several times.<br />
* All documents (including photos, videos and ''[[Logiciels#Screencast|screencast]]s'') must be accessible from the table below and from each tracking sheet. Plan for a copy on a USB key.<br />
* The students accompany you during your defense.<br />
* '''ALL borrowed equipment must be brought back and returned in a tote bag at the defense.'''<br />
<br />
=====Projets=====<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM5 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
!scope="col"| Documents<br />
|-<br />
<br />
!scope="row"| 1<br />
| [http://air.imag.fr/index.php/IaaS_collaboratif_avec_Docker IaaS - Docker]<br />
| Eudes Robin, Damotte Alan, Barthelemy Romain, Mammar Malek, Guo Kai<br />
| Didier Donsez<br />
| [[Projets-2015-2016-IaaS_Docker| '''Fiche''']] - [[Projets-2015-2016-IaaS_Docker-SRS| '''SRS''']]<br />
| [https://github.com/EudesRobin/iaas-collaboratif '''github''']<br />
| [[Media:RapportMPI_Iaas.pdf|Rapport MPI]] - [[Media:Transparents_IaaS.pdf|Transparents]] - [[Media:Flyer_IaaS.pdf|Flyer]] - [https://youtu.be/qtqgZNrgcRc '''Screencast''']<br />
|-<br />
!scope="row"| 2<br />
| [http://air.imag.fr/index.php/Portage_de_Bossa Portage de Bossa sur le Kernel Linux 4x]<br />
| Eric Michel Fotsing, Ombeline Rossi, Longfei Yao<br />
| Nicolas Palix, Didier Donsez<br />
| [[Projets-2015-2016-Portage_Bossa| '''Fiche''']] - [[Projets-2015-2016-Portage_Bossa-SRS| '''SRS''']]<br />
| Private repository<br />
| [[Media:Rapport_Bossa.pdf|Rapport]] - [[Media:Transparents_Bossa.pdf|Transparents]] - [[Media:Flyer_Bossa.pdf|Flyer]] - Photos - Vidéos <br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Visite immersive en réalité virtuelle dans une usine avec EDF]]<br />
| Adam Christophe, Aissanou Sarah, Klipffel Tararaina, Qian Jean, Zominy Laurent<br />
| Didier Donsez, Georges-Pierre Bonneau, Thibaut Cordier (EDF)<br />
| [[Projets-2015-2016-VisiteImmersiveEDF| '''Fiche''']]<br />
| [https://github.com/VisiteImmersiveEDF '''github''']<br />
| [[Media:RapportProjetX.pdf|Rapport]] - [[Media:TransparentsProojetX.pdf|Transparents]] - [[Media:FlyerProjetX.pdf|Flyer]] - Photos - Vidéos<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Contribution à OpenSmartCampus]] (voir http://data.beta.metropolegrenoble.fr/)<br />
| Quentin Torck, Vivien Michel, Jérémy Hammerer, Rama Codazzi, Zhengmeng Zhang<br />
| Didier Donsez, Vivien Quéma<br />
| [[Projets-2015-2016-OpenSmartCampus| '''Fiche''']]<br />
| [https://github.com/quentin74/SmartCampus.git '''github''']<br />
| [[Media:RapportProjetOpenSmartCampus2016.pdf|Rapport]] - [[Media:TransparentsProojetOpenSmartCampus2016.pdf|Transparents]] - [[Media:FlyerProjetOpenSmartCampus2016.pdf|Flyer]] - Photos - Vidéos<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Contribution à SmartClassRoom]] (Interfaces tactiles distribuées et partagées)<br />
| Saussac Thibault, Toussaint Sébastien, Hamdani Youcef, Zoppello Sebastien, Melik sak, Mesnier Vincent<br />
| Jérôme Maisonnasse, Didier Donsez<br />
| [[Projets-2015-2016-SmartClassRoom| '''Fiche''']] - [[Projets-2015-2016-SmartClassRoom/SRS| '''SRS''']]<br />
| [https://github.com/vince0508/SmartClassroom-TiledDisplayPart-master_Main '''github''']<br />
| [[Media:RapportProjetSmartClassRoom.pdf|Rapport]] - [[Media:TransparentsProjetSmartClassRoom.pdf|Transparents]] - [[Media:FlyerProjetSmartClassRoom.pdf|Flyer]] - [https://youtu.be/FEwoA4S9rsM '''Screencast/Vidéo''']<br />
|-<br />
<br />
<br />
|}<br />
<br />
===Projets annulés et reportés===<br />
* Project with [[Tango Project]] (cancelled)<br />
* Hack the Beam, Didier Donsez & Jérôme Maisonnasse.<br />
* [[Algorithmes de suivi de personnes pour robot de téléprésence RobAIR]] (Jérôme Maisonnasse, Didier Donsez)<br />
<br />
=M2PGI=<br />
==[[Projets M2PGI Services Machine-to-Machine|Projet Services Machine-to-Machine]]==<br />
* [[PM2M/2016/TP|Sujet et groupes]]</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=File:Real-Time-Subtitles.pdf&diff=29023File:Real-Time-Subtitles.pdf2016-04-06T07:57:08Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div></div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=Projets_2015-2016&diff=29022Projets 2015-20162016-04-06T07:52:41Z<p>Tran-Quang-Tan.Bui: /* Projet Semestre S8 */</p>
<hr />
<div><<[[Projets 2014-2015]] | [[Projets]] | [[Projets 2016-2017]]>><br />
=RICM=<br />
==RICM3==<br />
<br />
==RICM4==<br />
===Projet Semestre S8===<br />
<br />
Teachers in charge: Olivier Richard, Didier Donsez<br />
<br />
<br />
* '''Mid-term evaluation on Monday, March 7th''': format: 10 min (at most 5 min of presentation with no more than 3 slides, 5 min of discussion). This evaluation will count towards the final grade.<br />
<br />
'''General guidelines:'''<br />
<br />
* '''You must be proactive!''': if some points are unspecified or poorly specified, make a decision and justify your choices. For any technical problem you can: dig into the question, contact the author of the code where relevant, file a bug report ('''warning:''' this takes preparation!), submit a patch, or contact the teacher or the person following the project.<br />
<br />
* '''You must maintain a project tracking sheet''': it must be updated every week; it gathers the essential elements of the project, records the project's evolution and presents its roadmap. '''Note:''' the sheet's name must consist of the project name suffixed with ricm4_2015_2016.<br />
<br />
* '''You must use version-control software''' for your development, such as [http://en.wikipedia.org/wiki/Git_%28software%29 git ], and we recommend the site [https://github.com github] for hosting your public repository.<br />
<br />
* Public documents (for example on github) must be written in English (README, documentation, code comments, variable and function names). A bonus will be granted if the report and the slides are in English (the defense will be in French).<br />
<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM4 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
|-<br />
<br />
!scope="row"| 1<br />
| [[Dashboard pour gestionnaire de tâches et de ressources]]<br />
| CROUZET, MATHIEU<br />
| Richard<br />
| [[Projets-2015-2016-DashBoard| '''Fiche''']]<br />
| [https://github.com/MatthieuCrouzet/Projet4A '''github''']<br />
| [[Media:RapportProjetDashBoard.pdf|Rapport]] - [[Media:TransparentsDashboard.pdf|Transparents]] - [[Media:FlyerProjet1.pdf|Flyer]] - [[Media:gl_groupe1.pdf|Rapport Consultant]] - [[Media:Paterns.pdf|Patterns]] - [[Media:PresentationDashboard.pdf|Presentation]]<br />
|-<br />
<br />
!scope="row"| 2<br />
| [[Speeding Simplified Script Language]]<br />
| POPEK, BERTRAND-DALECHAMPS, WEI<br />
| Richard<br />
| [[Projets-2015-2016-SSSL| '''Fiche''']] - [[SSSL-UML| '''UML''']] - [[Projets-2015-2016-SSSL-SRS | '''SRS''']] <br />
| [https://github.com/FlorianPO/Speeding-Simplified-Script-Language.git '''github''']<br />
| [[Media:RapportProjet2.pdf|Rapport]] - [[Media:Groupe2_AIR.pdf|Rapport Consultant]] - [[Media:PresentationIntermediaireProjet2.pdf|Presentation_Intermediaire]] - [[Media:PresentationFinalProjet2.pdf|Presentation_final]] - [[Media:FlyerSSSL_projet2.pdf|flyer]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Borne interactive]] <br />
| DUNAND - NAVARRO - REVEL<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Borne-Interactive| '''Tracking sheet''']] - [[Projets-2015-2016-Borne-Interactive-SRS | '''SRS''']] - [[Projets-2015-2016-Borne-Interactive/UML_Diagrams | '''UML''']]<br />
| [https://github.com/Kant73/InteractiveDisplay '''github''']<br />
| [[Media:RapportProjet3.pdf|Report]] - [[Media:TransparentsProojet.pdf|Slides]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:IPopo.pdf|Consultant Report]] - [[Media:PatternDesign.pdf | '''Design Pattern''']] - [[Media:PresentationInteractiveDisplay.pdf|Mid-term Presentation]] - [https://docs.google.com/presentation/d/1teLj4GOT0qPPpVCVnBr1nDf-JPTqv0ZntCt2RLoBSKQ/edit?usp=sharing '''Final Presentation''']<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Sonotone]]<br />
| LECORPS, VOUTAT, Hattinguais <br />
| Maisonnasse, Richard<br />
| [[Projets-2015-2016-Sonotone| '''Tracking sheet''']] - [[Projets-2015-2016-Sonotone-SRS | '''SRS''']] - [[Projets-2015-2016-Sonotone-UML | '''UML''']]<br />
| [https://github.com/Gorgorot38/Sonotone-RICM4 '''github''']<br />
| [[Media:RapportProjetf.pdf|Report]] - [[Media:SlidesSonotone.pdf|Slides]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:SRS_Consultant_Sonotone_4.pdf|Consultant Report]] - [[Media:pattern_sonotone.pdf|Pattern]] - [[Media:Soutenance.pdf|Mid-term Defense]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Sous-titre_en_temps_r%C3%A9el_d%27un_cours| Real-time subtitles for a lecture]]<br />
| LECHEVALLIER, BUI, OUNISSI <br />
| Maisonnasse<br />
| [[LiveSubtitles| '''Tracking sheet''']]<br />
| [https://github.com/Lechevallier/RealTimeSubtitles '''github''']<br />
| [[Media:RapportProjet1.pdf|Report]] - [[Media:UMLLS.pdf|UML]] - [[Media:TransparentsProojet.pdf|Slides]] - [[Media:FlyerProjet4.pdf|Flyer]] - [[Media: SRS_Groupe_5.pdf| Consultant Report]]<br />
|-<br />
<br />
!scope="row"| 6<br />
| [[GrenobloisFuté]]<br />
| MOURET, DELAPORTE, LUCIDARME<br />
| Nicolas Palix<br />
| [[GrenobleFuté| '''Tracking sheet''']] - [[SRS - GrenobloisFuté | '''SRS''']] <br />
| [https://github.com/Lucidarme/Osmand.git '''github''']<br />
| [[Media:RapportProjet1.pdf|Report]] - [[Media:midPresentation.pdf|Mid-term Presentation]] - [[Media:Flyer GrenobloisFute(3).pdf|Flyer]] - [[Media:gl_G14.pdf|Consultant Report]] - [[Media:Présentation GrenobloisFuté.pdf|Slides]]<br />
|-<br />
<br />
!scope="row"| 7<br />
| [[Streaming en stéréoscopie]]<br />
| ZHAO ZILONG, HAMMOUTI<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Streaming-Stereoscopie| '''Tracking sheet''']] - [[SRS - Streaming en stéréoscopie | '''SRS''']] - [[Projets-2015-2016-streaming_stereo-UML | '''UML''']]<br />
| [https://github.com/zhao-zilong/streaming_stereo '''github''']<br />
| [[Media:Rapport_ZHAO_HAMMOUTI.pdf|Report]] - [[Media:TransparentsProojet.pdf|Slides]] - [[Media:FlyerProjet6.pdf|Flyer]] - [[Media:bruel_medewou_ndiaye.pdf|Consultant Report]] - [[Media:streaming.pdf|Mid-term Presentation]]<br />
|-<br />
<br />
!scope="row"| 8<br />
| [[PersyCup2016]]<br />
| BIN, ZEGAOUI, ELLAPIN <br />
| Donsez, Maisonnasse<br />
| [[PersyCup| '''Tracking sheet''']]<br />
| [https://github.com/legominstorm/lego '''github''']<br />
| [[Media:RapportProjet.pdf|Report]] - [[Media:TransparentsProojet.pdf|Slides]] - [[Media:FlyerProjet7.pdf|Flyer]] - [[Media:SoutenanceMiParcours-Persycup2016.pdf|Mid-term Defense]]<br />
|-<br />
<br />
!scope="row"| 9<br />
| [[Services étendus pour le modèle de composants iPOPO pour Python]]<br />
| FOUNAS, HALLAL, GATTAZ <br />
| Calmant & Donsez<br />
| [[Proj-2015-2016-Extensions_IPOPO | '''Tracking sheet''']] - [[Proj-2015-2016-Extensions_IPOPO/SRS | '''SRS''']] - [[Proj-2015-2016-Extensions_IPOPO/UML | '''UML''']] <br />
| [https://github.com/abdelazizFounas/ipopo/tree/tlsremote '''github IPOPO'''] <br /> [https://github.com/gattazr/IPOPO-Remote-Client '''github IPOPO Client''']<br />
| [[Media:9_RapportProjet9.pdf|Report]] - [[Media:9_TransparentsProojet.pdf|Slides]] - [[Media:9_FlyerProjet8.pdf|Flyer]] - [[Media:3-SRS-Pres.pdf| Consultant Report]] - [[Media:9_PatternStrat.pdf|Design Pattern]] - [[Media:9_Mid-Presentation.pdf|Mid-term Presentation]] - [[Media:9_Gantt.pdf|Gantt]]<br />
|-<br />
<br />
!scope="row"| 10<br />
| [[IndoorGeoloc2016]]<br />
| ARRADA - CRASTES - FAURE - STOIAN <br />
| Donsez<br />
| [[Proj-2015-2016-IndoorGeoloc/Fiche| '''Tracking sheet''']] - [[Proj-2015-2016-IndoorGeoloc/SRS|SRS]]<br />
| [https://github.com/QuentinFA/Geoloc_Indoor '''github''']<br />
| [[Media:Proj-2015-2016-IndoorGeoloc/RapportProjet.pdf|Report]] - [[Media:Proj-2015-2016-IndoorGeoloc/TransparentsProjet.pdf|Slides]] - [[Media:Flyer_geoloc.pdf|Flyer]] - [[Media: SRSGroupe17.pdf| Consultant Report]] - [[Media:Mi_parcours.pdf|Mid-term Presentation]] - [[Media:DESIGN_PATTERN_GEOLOC.pdf|Design Patterns]]<br />
|-<br />
<br />
!scope="row"| 11<br />
| [[UPnPOpenHAB2016]]<br />
| Medewou , Ndiaye Yacine , Bruel Anna <br />
| Didier Donsez<br />
| [[Proj-Openhab-2016| '''Tracking sheet''']] - [[Proj-2015-2016-Int%C3%A9gration_de_cam%C3%A9ra_de_surveillance_UPnP_%C3%A0_Openhab/SRS| '''SRS''']] - [[Proj-Openhab/UML| '''UML''']]<br />
| [https://github.com/openHab-UPnP '''github''']<br />
| [[Media:RapportProjet111.pdf|Report]] - [[Media:FlyerProjetAnglais111.pdf|English Flyer]] - [[Media:FlyerProjet10.pdf|French Flyer]] - [[Media:soutenace111.pdf|Defense]] - [[Media:TransparentsProojet111.pdf|Analyst Report]] - [[Media:gl_ZHAO_HAMMOUTI.pdf|Consultant Report]] - [[Media:pattern_ZHAO_HAMMOUTI.pdf|Patterns]] - [[Media:fichier111.pdf|Mini defense]]<br />
|-<br />
<br />
!scope="row"| 12<br />
| [[Sign2Speech]]<br />
| NIOGRET, NOGUERON, TITH<br />
| Didier Donsez<br />
| [[sign2speech_ricm4_2015_2016| '''Tracking sheet''']] - [[SRS - Sign2Speech | '''SRS''']] - [[UML | '''UML''']]<br />
| [https://github.com/SignToSpeech-Project '''github'''] [[Media:Sign2Speech_2015_2015.tar.gz|'''Sign2Speech Client''']] [[Media:Sign2Speech-server_2015_2015.tar.gz|'''Sign2Speech Server''']]<br />
| [[Media:RapportProjet12_Sign2Speech_2015_2016.pdf|Report]] - [[Media:TransparentsProjet12_Sign2Speech_2015_2016.pdf|Slides]] - [[Media:FlyerProjet11_Sign2Speech_2015-2016.pdf|Flyer]] - [[Media:12-Sign2Speech-RapportConsultant.pdf|Consultant Report]] - [[Media:12-Sign2Speech-MidPres.pdf|Mid-term Presentation]] - [[Sign2Speech_RICM4_2015-2016_User_Manual|User Manual]]<br />
|-<br />
<br />
!scope="row"| 13<br />
| [[AstroImage]] <br />
| RACHEX, BLANC, GERRY<br />
| Olivier Richard et Bruno Bzeznik<br />
| [[Proj-2015-2016-Astroimage/Fiche| '''Tracking sheet''']] - [[AstroImage/SRS | '''SRS''']] - [[Media:AstroImage-UML.pdf | '''UML''']]<br />
| [https://github.com/nicolas-blanc/AstroImage '''github''']<br />
| [[Media:RapportProjet13.pdf|Report]] - [[Media:TransparentsProojet.pdf|Slides]] - [[Media:FlyerProjet12.pdf|Flyer]] - [[Media:13-AstroImage-RapportConsultant.pdf|Consultant Report]] - [https://docs.google.com/presentation/d/15F8DRktwmOuSNabdxMASniyr-TIiRzGNNG1mOhcoSnk/edit?usp=sharing '''Patterns''']<br />
|-<br />
<br />
!scope="row"| 14<br />
| [[Tachymètre]]<br />
| MACE, NOUGUIER, RAMEL<br />
| Olivier Gattaz<br />
| [[Fiche - Tachymètre | '''Tracking sheet''']] - [[SRS - Tachymètre| '''SRS''']] - [[UML - Tachymètre| '''UML''']]<br />
| [https://github.com/Quego/Tachymetre '''github - Tachymètre''']<br />
| [[Media:Projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Report]] - [[Media:Pr%C3%A9sentation_projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Slides]] - [[Media:D%C3%A9pliant_Tachym%C3%A8tre_-_MAC%C3%89_NOUGUIER_RAMEL.pdf|Flyer]] - [[Media:srs_tachymetre.pdf|Consultant Report]] - [[Media:14_PatternDesign.pdf | Design Pattern]] - [[Media:Tachymetre_Presentation.pdf | Mid-project Presentation]]<br />
|-<br />
<br />
!scope="row"| 15<br />
| [[SmartProjector]]<br />
| BRANGER, HABLOT<br />
| Donsez, Maisonnasse<br />
| [[Fiche_SmartProjector_ricm4_2015_2016| '''Tracking sheet''']] - [[SRS - SmartProjector| '''SRS''']] - [[UML - SmartProjector| '''UML''']]<br />
| [https://github.com/P0ppoff/SmartProjector '''github''']<br />
| [[Media:Expose final.pdf|Report]] - [[Media:PresentationPorjet.pdf|Presentation Slides]] - [[Media:Flyer_SmartProjector.pdf|Flyer]] - [[Media:Gl_groupe16.pdf|Consultant Report]] - [http://air.imag.fr/index.php/Patron_de_conception_-_SmartProjector Patterns] - [[Media:Soutenance_SP.pdf|Final Defense]]<br />
|-<br />
<br />
|}<br />
<br />
===Project list===<br />
<br />
* [[Dashboard pour gestionnaire de tâches et de ressources]], Olivier Richard<br />
* [[Moteur distribué d'exécution de commande]], Olivier Richard<br />
* [[Environnement d'expérimentation de pour NVIDIA Shield (Tegra X1)]], Olivier Richard <br />
* [[Speeding Simplified Script Language]], Olivier Richard<br />
<br />
* Open-source assistance for hearing impairment, with Didier Donsez, Jérome Maisonnasse, Marie-Paule Balicco (SAH UGA) and Nicolas Vuillerme<br />
** [[Borne interactive]] (1 topic)<br />
** [[Sonotone]] (1 topic)<br />
** [[Sous-titre en temps réel d'un cours]] (1 topic)<br />
* [[GrenobloisFuté]] Traffic layer on OsmAnd via a plugin. Dynamic data from the metro network. Android development. Nicolas Palix.<br />
* [[GeoDiff]] Production, visualization, and merging of variations (diffs) on geocoded information: Nicolas Palix<br />
* [[Smart campus augmenté et contributif]] Didier Donsez, Vivien Quema<br />
<br />
* [[Streaming en stéréoscopie]] over [[WebRTC]] with rendering on an [[Oculus]] for the [[RobAIR]] robot, Jérôme Maisonnasse. ([http://gstconf.ubicast.tv/videos/stereoscopic-3d-video/ see]).<br />
* [[STM32F7]]: setting up the compilation toolchain on Linux with [[OpenSTM32]] and [[OpenOCD]]. Nicolas Palix<br />
* [[PersyCup2016]]: Persyval Robocup, Didier Donsez, Vivien Quema, Jérome Maisonnasse. (3 students)<br />
* [[Services étendus pour le modèle de composants iPOPO pour Python]], Didier Donsez & Thomas Calmant. (2 students)<br />
* [[SmartClassRoom2016|Development of a shared interface for touch tables (SmartClassRoom project)]], Didier Donsez, Jérôme Maisonnasse. (2 students)<br />
* [[iRock2016|iRock: landslide monitoring]], Didier Donsez & Vivien Quema<br />
* [[IndoorGeoloc2016|Indoor geolocation using BLE and Wifi beacons, based on STM32 and iBeacon & AltBeacon beacons]], Didier Donsez & Vivien Quema<br />
* [[UPnPOpenHAB2016|Integration and management of UPnP surveillance cameras in the open-source home automation platforms OpenHAB and myOpenHAB]], Didier Donsez & Jérome Maisonnasse.<br />
<br />
'''Lower-priority projects'''<br />
<br />
* [[Liveprogramming with Kivy]], Olivier Richard<br />
* [[AstroImage]] astronomical image production, Olivier Richard and Bruno Bzeznik<br />
* [[G-code Cruncher]] CNC machine control (Nucleo grbl + esp8266 + SD card), Olivier Richard<br />
* [[Intégration OpenHAB / OpenTele]] Nicolas Palix<br />
<br />
==RICM5==<br />
<br />
===Semester S10 Project===<br />
<br />
Teacher in charge: Didier Donsez<br />
<br />
Kick-off: Monday 25/01, 10:30-12:30, room P253 (meet in front of the AIR room) - Videoconference for Thibaut Cordier<br />
<br />
Defense: Thursday 17/03, 13:00-17:00, room P043 (Polytech Grenoble), then room C005 (Building C) <br />
<br />
Students: RICM5 + 8 Avosti DUT RT students<br />
<br />
Reminder of MPI sessions<br />
* Session 1: Tuesday 26 January, afternoon - Stéphanie Diligent<br />
* Session 2: Tuesday 2 February, afternoon - Stéphanie Diligent<br />
* Session 3: Monday 8 February, morning - Emmanuelle Tréhoust<br />
* Session 4: Thursday 11 February, morning - Emmanuelle Tréhoust<br />
* Session 5: Monday 21 March, morning - Stéphanie Diligent and Emmanuelle Tréhoust<br />
<br />
=====Defenses=====<br />
Schedule:<br />
* Bossa (13:00-13:40, room P043)<br />
* EDF Immersion (13:45-14:25, room P043)<br />
* IaaS Docker (14:30-15:10, room P043)<br />
* SmartCampus (15:15-15:55, room P043 and room P259 AIR)<br />
* SmartClassRoom (16:15-16:55, C005)<br />
* Farewell drinks (17:00-18:00, C005)<br />
<br />
Instructions:<br />
* Each defense consists of a 15-minute presentation, a 15-minute demonstration and 10 minutes of questions. One slide must be devoted to the work assigned to and carried out by the DUT (AVOSTI) students.<br />
* Rehearse your presentation and your demonstration several times.<br />
* All documents (including photos, videos and ''[[Logiciels#Screencast|screencast]]s'') must be reachable from the table below and from each tracking sheet. Bring a copy on a USB key.<br />
* The students accompany you during your defense.<br />
* '''ALL borrowed equipment must be brought back and returned in a tote bag at the defense.'''<br />
<br />
=====Projects=====<br />
{|class="wikitable alternance"<br />
|+ RICM5 2015-2016 project assignments<br />
|-<br />
|<br />
!scope="col"| Subject<br />
!scope="col"| Students<br />
!scope="col"| Teacher(s)<br />
!scope="col"| Tracking sheet<br />
!scope="col"| Git repository<br />
!scope="col"| Documents<br />
|-<br />
<br />
!scope="row"| 1<br />
| [http://air.imag.fr/index.php/IaaS_collaboratif_avec_Docker IaaS - Docker]<br />
| Eudes Robin, Damotte Alan, Barthelemy Romain, Mammar Malek, Guo Kai<br />
| Didier Donsez<br />
| [[Projets-2015-2016-IaaS_Docker| '''Tracking sheet''']] - [[Projets-2015-2016-IaaS_Docker-SRS| '''SRS''']]<br />
| [https://github.com/EudesRobin/iaas-collaboratif '''github''']<br />
| [[Media:RapportMPI_Iaas.pdf|MPI Report]] - [[Media:Transparents_IaaS.pdf|Slides]] - [[Media:Flyer_IaaS.pdf|Flyer]] - [https://youtu.be/qtqgZNrgcRc '''Screencast''']<br />
|-<br />
!scope="row"| 2<br />
| [http://air.imag.fr/index.php/Portage_de_Bossa Porting Bossa to the Linux 4.x kernel]<br />
| Eric Michel Fotsing, Ombeline Rossi, Longfei Yao<br />
| Nicolas Palix, Didier Donsez<br />
| [[Projets-2015-2016-Portage_Bossa| '''Tracking sheet''']] - [[Projets-2015-2016-Portage_Bossa-SRS| '''SRS''']]<br />
| Private repository<br />
| [[Media:Rapport_Bossa.pdf|Report]] - [[Media:Transparents_Bossa.pdf|Slides]] - [[Media:Flyer_Bossa.pdf|Flyer]] - Photos - Videos <br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Visite immersive en réalité virtuelle dans une usine avec EDF]]<br />
| Adam Christophe, Aissanou Sarah, Klipffel Tararaina, Qian Jean, Zominy Laurent<br />
| Didier Donsez, Georges-Pierre Bonneau, Thibaut Cordier (EDF)<br />
| [[Projets-2015-2016-VisiteImmersiveEDF| '''Tracking sheet''']]<br />
| [https://github.com/VisiteImmersiveEDF '''github''']<br />
| [[Media:RapportProjetX.pdf|Report]] - [[Media:TransparentsProojetX.pdf|Slides]] - [[Media:FlyerProjetX.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Contribution à OpenSmartCampus]] (see http://data.beta.metropolegrenoble.fr/)<br />
| Quentin Torck, Vivien Michel, Jérémy Hammerer, Rama Codazzi, Zhengmeng Zhang<br />
| Didier Donsez, Vivien Quéma<br />
| [[Projets-2015-2016-OpenSmartCampus| '''Tracking sheet''']]<br />
| [https://github.com/quentin74/SmartCampus.git '''github''']<br />
| [[Media:RapportProjetOpenSmartCampus2016.pdf|Report]] - [[Media:TransparentsProojetOpenSmartCampus2016.pdf|Slides]] - [[Media:FlyerProjetOpenSmartCampus2016.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Contribution à SmartClassRoom]] (Distributed and shared touch interfaces)<br />
| Saussac Thibault, Toussaint Sébastien, Hamdani Youcef, Zoppello Sebastien, Melik sak, Mesnier Vincent<br />
| Jérôme Maisonnasse, Didier Donsez<br />
| [[Projets-2015-2016-SmartClassRoom| '''Tracking sheet''']] - [[Projets-2015-2016-SmartClassRoom/SRS| '''SRS''']]<br />
| [https://github.com/vince0508/SmartClassroom-TiledDisplayPart-master_Main '''github''']<br />
| [[Media:RapportProjetSmartClassRoom.pdf|Report]] - [[Media:TransparentsProjetSmartClassRoom.pdf|Slides]] - [[Media:FlyerProjetSmartClassRoom.pdf|Flyer]] - [https://youtu.be/FEwoA4S9rsM '''Screencast/Video''']<br />
|-<br />
<br />
<br />
|}<br />
<br />
===Cancelled and postponed projects===<br />
* Project with [[Tango Project]] (Cancelled)<br />
* Hack the Beam, Didier Donsez & Jérôme Maisonnasse.<br />
* [[Algorithmes de suivi de personnes pour robot de téléprésence RobAIR]] (Jérôme Maisonnasse, Didier Donsez)<br />
<br />
=M2PGI=<br />
==[[Projets M2PGI Services Machine-to-Machine|Machine-to-Machine Services Project]]==<br />
* [[PM2M/2016/TP|Topic and groups]]</div>
<hr />
<div><<[[Projets 2014-2015]] | [[Projets]] | [[Projets 2016-2017]]>><br />
=RICM=<br />
==RICM3==<br />
<br />
==RICM4==<br />
===Projet Semestre S8===<br />
<br />
Enseignants responsables : Olivier Richard, Didier Donsez<br />
<br />
<br />
* '''Evaluation à mi-parcours le lundi 7 mars''': Format: 10min (5min de présentation 3 slides au plus, 5min de discussion). Cette évaluation sera prise en compte dans la note finale.<br />
<br />
'''Consignes générales:'''<br />
<br />
* '''Vous devez être pro-actifs !!!''': Si des points sont pas ou mals spécifiés, vous le faîtes et vous justifiez vos choix. Pour les problèmes techniques éventuels vous pouvez: vous creusez la question, vous contactez l'auteur du code si il y a lieux, vous faites un rapport de bug ('''Attention:''' ca se prépare !), vous soumettez un patch, vous contactez l'enseignant ou la personne suivant le projet.<br />
<br />
* '''Vous devez maintenir une fiche de suivi de projet''': elle doit être mise à jour chaque semaine, elle rassemble les élements essentiels du projet, elle <br />
indique les évolutions du projet et présente sa feuille de route. '''Note:''' le nom de la fiche doit être composé du nom du projet et suffixé par ricm4_2015_2016.<br />
<br />
* '''Vous devez utiliser un logiciel de gestion de version''' pour vos développements comme [http://en.wikipedia.org/wiki/Git_%28software%29 git ] et nous vous conseillons d'utiliser le site [https://github.com github] pour l'hébergement de votre dépôt public.<br />
<br />
* Les document public (exemple sur github) doivent être rédigés en anglais (README, documentation, commentaires de code, nom de variables et de fonctions). Une bonnification sera accordée si le rapport et les transparents sont en anglais (la soutenance sera en francais).<br />
<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM4 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
|-<br />
<br />
!scope="row"| 1<br />
| [[Dashboard pour gestionnaire de tâches et de ressources]]<br />
| CROUZET, MATHIEU<br />
| Richard<br />
| [[Projets-2015-2016-DashBoard| '''Fiche''']]<br />
| [https://github.com/MatthieuCrouzet/Projet4A '''github''']<br />
| [[Media:RapportProjetDashBoard.pdf|Rapport]] - [[Media:TransparentsDashboard.pdf|Transparents]] - [[Media:FlyerProjet1.pdf|Flyer]] - [[Media:gl_groupe1.pdf|Rapport Consultant]] - [[Media:Paterns.pdf|Patterns]] - [[Media:PresentationDashboard.pdf|Presentation]]<br />
|-<br />
<br />
!scope="row"| 2<br />
| [[Speeding Simplified Script Language]]<br />
| POPEK, BERTRAND-DALECHAMPS, WEI<br />
| Richard<br />
| [[Projets-2015-2016-SSSL| '''Fiche''']] - [[SSSL-UML| '''UML''']] - [[Projets-2015-2016-SSSL-SRS | '''SRS''']] <br />
| [https://github.com/FlorianPO/Speeding-Simplified-Script-Language.git '''github''']<br />
| [[Media:RapportProjet2.pdf|Rapport]] - [[Media:Groupe2_AIR.pdf|Rapport Consultant]] - [[Media:PresentationIntermediaireProjet2.pdf|Presentation_Intermediaire]] - [[Media:PresentationFinalProjet2.pdf|Presentation_final]] - [[Media:FlyerSSSL_projet2.pdf|flyer]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Borne interactive]] <br />
| DUNAND - NAVARRO - REVEL<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Borne-Interactive| '''Fiche''']] - [[Projets-2015-2016-Borne-Interactive-SRS | '''SRS''']] - [[Projets-2015-2016-Borne-Interactive/UML_Diagrams | '''UML''']]<br />
| [https://github.com/Kant73/InteractiveDisplay '''github''']<br />
| [[Media:RapportProjet3.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:IPopo.pdf|Rapport Consultant]] - [[Media:PatternDesign.pdf | '''Design Pattern''']] - [[Media:PresentationInteractiveDisplay.pdf|Présentation Intermédiaire]] - [https://docs.google.com/presentation/d/1teLj4GOT0qPPpVCVnBr1nDf-JPTqv0ZntCt2RLoBSKQ/edit?usp=sharing '''Présentation finale''']<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Sonotone]]<br />
| LECORPS, VOUTAT, Hattinguais <br />
| Maisonnasse, Richard<br />
| [[Projets-2015-2016-Sonotone| '''Fiche''']] - [[Projets-2015-2016-Sonotone-SRS | '''SRS''']] - [[Projets-2015-2016-Sonotone-UML | '''UML''']]<br />
| [https://github.com/Gorgorot38/Sonotone-RICM4 '''github''']<br />
| [[Media:RapportProjetf.pdf|Rapport]] - [[Media:SlidesSonotone.pdf|Transparents]] - [[Media:FlyerProjet3.pdf|Flyer]] - [[Media:SRS_Consultant_Sonotone_4.pdf|Rapport_Consultant]] - [[Media:pattern_sonotone.pdf|Pattern]] - [[Media:Soutenance.pdf|Soutenance_miparcours]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Sous-titre_en_temps_r%C3%A9el_d%27un_cours| Sous-titre d'un cours en temps réel]]<br />
| LECHEVALLIER, BUI, OUNISSI <br />
| Maisonnasse<br />
| [[LiveSubtitles| '''Fiche''']]<br />
| [https://github.com/Lechevallier/RealTimeSubtitles '''github''']<br />
| [[Media:RapportProjet1.pdf|Rapport]] -[[Media:UMLSB.pdf|UML]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet4.pdf|Flyer]] - [[Media: SRS_Groupe_5.pdf| Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 6<br />
| [[GrenobloisFuté]]<br />
| MOURET, DELAPORTE, LUCIDARME<br />
| Nicolas Palix<br />
| [[GrenobleFuté| '''Fiche''']] - [[SRS - GrenobloisFuté | '''SRS''']] <br />
| [https://github.com/Lucidarme/Osmand.git '''github''']<br />
| [[Media:RapportProjet1.pdf|Rapport]] - [[Media:midPresentation.pdf|Mid Presentation]] - [[Media:Flyer GrenobloisFute(3).pdf|Flyer]] - [[Media:gl_G14.pdf|Rapport Consultant]] - [[Media:Présentation GrenobloisFuté.pdf|Transparents]]<br />
|-<br />
<br />
!scope="row"| 7<br />
| [[Streaming en stéréoscopie]]<br />
| ZHAO ZILONG, HAMMOUTI<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Streaming-Stereoscopie| '''Fiche''']] - [[SRS - Streaming en stéréoscopie | '''SRS''']] - [[Projets-2015-2016-streaming_stereo-UML | '''UML''']]<br />
| [https://github.com/zhao-zilong/streaming_stereo '''github''']<br />
| [[Media:Rapport_ZHAO_HAMMOUTI.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet6.pdf|Flyer]] - [[Media:bruel_medewou_ndiaye.pdf|Rapport_consultant]] - [[Media:streaming.pdf|mi-parcours]]<br />
|-<br />
<br />
!scope="row"| 8<br />
| [[PersyCup2016]]<br />
| BIN, ZEGAOUI, ELLAPIN <br />
| Donsez, Maisonnasse<br />
| [[PersyCup| '''Fiche''']]<br />
| [https://github.com/legominstorm/lego '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet7.pdf|Flyer]] - [[Media:SoutenanceMiParcours-Persycup2016.pdf|Soutenance Mi-parcours]]<br />
|-<br />
<br />
!scope="row"| 9<br />
| [[Services étendus pour le modèle de composants iPOPO pour Python]]<br />
| FOUNAS, HALLAL, GATTAZ <br />
| Calmant & Donsez<br />
| [[Proj-2015-2016-Extensions_IPOPO | '''Fiche''']] - [[Proj-2015-2016-Extensions_IPOPO/SRS | '''SRS''']] - [[Proj-2015-2016-Extensions_IPOPO/UML | '''UML''']] <br />
| [https://github.com/abdelazizFounas/ipopo/tree/tlsremote '''github IPOPO'''] <br /> [https://github.com/gattazr/IPOPO-Remote-Client '''github IPOPO Client''']<br />
| [[Media:9_RapportProjet9.pdf|Rapport]] - [[Media:9_TransparentsProojet.pdf|Transparents]] - [[Media:9_FlyerProjet8.pdf|Flyer]] - [[Media:3-SRS-Pres.pdf| Rapport Consultant]] - [[Media:9_PatternStrat.pdf|Pattern Design]] - [[Media:9_Mid-Presentation.pdf|Mid Presentation]] - [[Media:9_Gantt.pdf|Gantt]]<br />
|-<br />
<br />
!scope="row"| 10<br />
| [[IndoorGeoloc2016]]<br />
| ARRADA - CRASTES - FAURE - STOIAN <br />
| Donsez<br />
| [[Proj-2015-2016-IndoorGeoloc/Fiche| '''Fiche''']] - [[Proj-2015-2016-IndoorGeoloc/SRS|SRS]]<br />
| [https://github.com/QuentinFA/Geoloc_Indoor '''github''']<br />
| [[Media:Proj-2015-2016-IndoorGeoloc/RapportProjet.pdf|Rapport]] - [[Media:Proj-2015-2016-IndoorGeoloc/TransparentsProjet.pdf|Transparents]] - [[Media:Flyer_geoloc.pdf|Flyer]] - [[Media: SRSGroupe17.pdf| Rapport Consultant]] - [[Media:Mi_parcours.pdf|Mid presentation]] - [[Media:DESIGN_PATTERN_GEOLOC.pdf|Mid presentation]]<br />
|-<br />
<br />
!scope="row"| 11<br />
| [[UPnPOpenHAB2016]]<br />
| Medewou , Ndiaye Yacine , Bruel Anna <br />
| Didier Donsez<br />
| [[Proj-Openhab-2016| '''Fiche''']] - [[Proj-2015-2016-Int%C3%A9gration_de_cam%C3%A9ra_de_surveillance_UPnP_%C3%A0_Openhab/SRS| '''SRS''']] - [[Proj-Openhab/UML| '''UML''']]<br />
| [https://github.com/openHab-UPnP '''github''']<br />
| [[Media:RapportProjet111.pdf|Rapport]] - [[Media:FlyerProjetAnglais111.pdf|EnglishFlyer]] - [[Media:FlyerProjet10.pdf|FrenchFlyer]] - [[Media:soutenace111.pdf|Soutenance]] - [[Media:TransparentsProojet111.pdf|Rapport Analyste]] - [[Media:gl_ZHAO_HAMMOUTI.pdf|Rapport Consultant]] - [[Media:pattern_ZHAO_HAMMOUTI.pdf|Patterns]] - [[Media:fichier111.pdf|Mini soutenance]]<br />
|-<br />
<br />
!scope="row"| 12<br />
| [[Sign2Speech]]<br />
| NIOGRET, NOGUERON, TITH<br />
| Didier Donsez<br />
| [[sign2speech_ricm4_2015_2016| '''Fiche''']] - [[SRS - Sign2Speech | '''SRS''']] - [[UML | '''UML''']]<br />
| [https://github.com/SignToSpeech-Project '''github'''] [[Media:Sign2Speech_2015_2015.tar.gz|'''Sign2Speech Client''']] [[Media:Sign2Speech-server_2015_2015.tar.gz|'''Sign2Speech Server''']]<br />
| [[Media:RapportProjet12_Sign2Speech_2015_2016.pdf|Rapport]] - [[Media:TransparentsProjet12_Sign2Speech_2015_2016.pdf|Transparents]] - [[Media:FlyerProjet11_Sign2Speech_2015-2016.pdf|Flyer]] - [[Media:12-Sign2Speech-RapportConsultant.pdf|Rapport Consultant]] - [[Media:12-Sign2Speech-MidPres.pdf|Mid presentation]] - [[Sign2Speech_RICM4_2015-2016_User_Manual|User Manual]]<br />
|-<br />
<br />
!scope="row"| 13<br />
| [[AstroImage]] <br />
| RACHEX, BLANC, GERRY<br />
| Olivier Richard et Bruno Bzeznik<br />
| [[Proj-2015-2016-Astroimage/Fiche| '''Fiche''']] - [[AstroImage/SRS | '''SRS''']] - [[Media:AstroImage-UML.pdf | '''UML''']]<br />
| [https://github.com/nicolas-blanc/AstroImage '''github''']<br />
| [[Media:RapportProjet13.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet12.pdf|Flyer]] - [[Media:13-AstroImage-RapportConsultant.pdf|Rapport Consultant]] - [https://docs.google.com/presentation/d/15F8DRktwmOuSNabdxMASniyr-TIiRzGNNG1mOhcoSnk/edit?usp=sharing '''Patterns''']<br />
|-<br />
<br />
!scope="row"| 14<br />
| [[Tachymètre]]<br />
| MACE, NOUGUIER, RAMEL<br />
| Olivier Gattaz<br />
| [[Fiche - Tachymètre | '''Fiche''']] - [[SRS - Tachymètre| '''SRS''']] - [[UML - Tachymètre| '''UML''']]<br />
| [https://github.com/Quego/Tachymetre '''github - Tachymètre''']<br />
| [[Media:Projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Rapport]] - [[Media:Pr%C3%A9sentation_projet_Tachym%C3%A8tre_-_MACE_NOUGUIER_RAMEL.pdf|Transparents]] - [[Media:D%C3%A9pliant_Tachym%C3%A8tre_-_MAC%C3%89_NOUGUIER_RAMEL.pdf|Flyer]] - [[Media:srs_tachymetre.pdf|Rapport consultant]] - [[Media:14_PatternDesign.pdf | Pattern Design]] - [[Media:Tachymetre_Presentation.pdf | Présentation de milieu de projet]]<br />
|-<br />
<br />
!scope="row"| 15<br />
| [[SmartProjector]]<br />
| BRANGER, HABLOT<br />
| Donsez, Maisonnasse<br />
| [[Fiche_SmartProjector_ricm4_2015_2016| '''Fiche''']] - [[SRS - SmartProjector| '''SRS''']] - [[UML - SmartProjector| '''UML''']]<br />
| [https://github.com/P0ppoff/SmartProjector '''github''']<br />
| [[Media:Expose final.pdf|Rapport]] - [[Media:PresentationPorjet.pdf|Transparents Présentation]] - [[Media:Flyer_SmartProjector.pdf|Flyer]] - [[Media:Gl_groupe16.pdf|Rapport Consultant]] - [http://air.imag.fr/index.php/Patron_de_conception_-_SmartProjector Patterns] - [[Media:Soutenance_SP.pdf|Soutenance finale]]<br />
|-<br />
<br />
|}<br />
<br />
===Liste de projets===<br />
<br />
* [[Dashboard pour gestionnaire de tâches et de ressources]], Olivier Richard<br />
* [[Moteur distribué d'exécution de commande]], Olivier Richard<br />
* [[Environnement d'expérimentation de pour NVIDIA Shield (Tegra X1)]], Olivier Richard <br />
* [[Speeding Simplified Script Language]], Olivier Richard<br />
<br />
* Aide (Open-Source)au Handicap Auditif, avec Didier Donsez, Jérome Maisonnasse, Marie-Paule Balicco (SAH UGA) et Nicolas Vuillerme<br />
** [[Borne interactive]] (1 sujet)<br />
** [[Sonotone]] (1 sujet)<br />
** [[Sous-titre en temps réel d'un cours]] (1 sujet)<br />
* [[GrenobloisFuté]] Couche trafic sur OsmAnd avec un greffon. Données dynamique de la métro. Dvp Android. Nicolas Palix.<br />
* [[GeoDiff]] Production, visualisation, fusion de variations (diff) sur de l'information géocodée : Nicolas Palix<br />
* [[Smart campus augmenté et contributif]] Didier Donsez, Vivien Quema<br />
<br />
* [[Streaming en stéréoscopie]] sur [[WebRTC]] avec rendu sur [[Oculus]] pour le robot [[RobAIR]], Jérôme Maisonnasse. ([http://gstconf.ubicast.tv/videos/stereoscopic-3d-video/ voir]).<br />
* [[STM32F7]] : Mise en oeuvre de la chaîne de compilation sous Linux avec [[OpenSTM32]] et [[OpenOCD]]. Nicolas Palix<br />
* [[PersyCup2016]] : Persyval Robocup, Didier Donsez, Vivien Quema, Jérome Maisonnasse. (3 étudiants)<br />
* [[Services étendus pour le modèle de composants iPOPO pour Python]], Didier Donsez & Thomas Calmant. (2 étudiants)<br />
* [[SmartClassRoom2016|Développement d'une interface partagée pour tables tactiles (projet SmartClassRoom)]], Didier Donsez, Jérôme Maisonnasse. (2 étudiants)<br />
* [[iRock2016|iRock : surveillance de glissement de terrains]], Didier Donsez & Vivien Quema<br />
* [[IndoorGeoloc2016|Géolocalisation in-door au moyen de balises (beacon) BLE et Wifi à base de STM32 et de balises iBeacon & AltBeacon]], Didier Donsez & Vivien Quema<br />
* [[UPnPOpenHAB2016|Intégration et gestion de caméras de surveillance UPnP dans la plateforme domotique open-source OpenHAB et myOpenHAB]], Didier Donsez & Jérome Maisonnasse.<br />
<br />
'''Projets non prioritaires'''<br />
<br />
* [[Liveprogramming with Kivy]], Olivier Richard<br />
* [[AstroImage]] production d'image d'astronomie, Olivier Richard et Bruno Bzeznik<br />
* [[G-code Cruncher]] Controle de machine CNC (Nucleo grbl + esp8266 + Sdcard), Olivier Richard<br />
* [[Intégration OpenHAB / OpenTele]] Nicolas Palix<br />
<br />
==RICM5==<br />
<br />
===Projet Semestre S10===<br />
<br />
Enseignant responsable : Didier Donsez<br />
<br />
Démarrage : Lundi 25/01 à 10H30-12H30, P253 (Rendez-vous devant la salle AIR) - Visioconf pour Thibaut Cordier<br />
<br />
Soutenance : Jeudi 17/03 à 13H00-17H00, salle P043 (Polytech Grenoble)puis en salle C005 (Batiment C) <br />
<br />
Etudiants : RICM5 + 8 étudiants Avosti DUT RT<br />
<br />
Rappel séances MPI<br />
* Séance 1 : mardi 26 janvier après midi - Stéphanie Diligent<br />
* Séance 2 : mardi 2 février après midi - Stéphanie Diligent<br />
* Séance 3 : lundi 8 février matin - Emmanuelle Tréhoust<br />
* Séance 4 : jeudi 11 février matin - Emmanuelle Tréhoust<br />
* Séance 5 : lundi 21 mars matin - Stéphanie Diligent et Emmanuelle Tréhoust<br />
<br />
=====Defenses=====<br />
Schedule:<br />
* Bossa (13:00-13:40, room P043)<br />
* Immersion EDF (13:45-14:25, room P043)<br />
* IaaS Docker (14:30-15:10, room P043)<br />
* SmartCampus (15:15-15:55, room P043 and room P259 AIR)<br />
* SmartClassRoom (16:15-16:55, room C005)<br />
* Farewell party (17:00-18:00, room C005)<br />
<br />
Instructions:<br />
* Each defense consists of 15 minutes of presentation, 15 minutes of demonstration and 10 minutes of questions. One slide must be devoted to the work assigned to and carried out by the DUT (AVOSTI) students.<br />
* Rehearse your presentation and your demonstration several times.<br />
* All documents (including photos, videos and ''[[Logiciels#Screencast|screencast]]s'') must be accessible from the table below and from each progress-tracking sheet. Bring a copy on a USB key.<br />
* The students accompany you during your defense.<br />
* '''ALL borrowed equipment must be brought back and returned in a tote bag at the defense.'''<br />
<br />
=====Projects=====<br />
{|class="wikitable alternance"<br />
|+ RICM5 project assignments, 2015-2016<br />
|-<br />
|<br />
!scope="col"| Subject<br />
!scope="col"| Students<br />
!scope="col"| Teacher(s)<br />
!scope="col"| Progress sheet<br />
!scope="col"| Git repository<br />
!scope="col"| Documents<br />
|-<br />
<br />
!scope="row"| 1<br />
| [http://air.imag.fr/index.php/IaaS_collaboratif_avec_Docker IaaS - Docker]<br />
| Eudes Robin, Damotte Alan, Barthelemy Romain, Mammar Malek, Guo Kai<br />
| Didier Donsez<br />
| [[Projets-2015-2016-IaaS_Docker| '''Sheet''']] - [[Projets-2015-2016-IaaS_Docker-SRS| '''SRS''']]<br />
| [https://github.com/EudesRobin/iaas-collaboratif '''github''']<br />
| [[Media:RapportMPI_Iaas.pdf|MPI report]] - [[Media:Transparents_IaaS.pdf|Slides]] - [[Media:Flyer_IaaS.pdf|Flyer]] - [https://youtu.be/qtqgZNrgcRc '''Screencast''']<br />
|-<br />
!scope="row"| 2<br />
| [http://air.imag.fr/index.php/Portage_de_Bossa Porting Bossa to the Linux 4.x kernel]<br />
| Eric Michel Fotsing, Ombeline Rossi, Longfei Yao<br />
| Nicolas Palix, Didier Donsez<br />
| [[Projets-2015-2016-Portage_Bossa| '''Sheet''']] - [[Projets-2015-2016-Portage_Bossa-SRS| '''SRS''']]<br />
| Private repository<br />
| [[Media:Rapport_Bossa.pdf|Report]] - [[Media:Transparents_Bossa.pdf|Slides]] - [[Media:Flyer_Bossa.pdf|Flyer]] - Photos - Videos <br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Visite immersive en réalité virtuelle dans une usine avec EDF|Immersive virtual-reality factory tour with EDF]]<br />
| Adam Christophe, Aissanou Sarah, Klipffel Tararaina, Qian Jean, Zominy Laurent<br />
| Didier Donsez, Georges-Pierre Bonneau, Thibaut Cordier (EDF)<br />
| [[Projets-2015-2016-VisiteImmersiveEDF| '''Sheet''']]<br />
| [https://github.com/VisiteImmersiveEDF '''github''']<br />
| [[Media:RapportProjetX.pdf|Report]] - [[Media:TransparentsProojetX.pdf|Slides]] - [[Media:FlyerProjetX.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Contribution à OpenSmartCampus|Contribution to OpenSmartCampus]] (see http://data.beta.metropolegrenoble.fr/)<br />
| Quentin Torck, Vivien Michel, Jérémy Hammerer, Rama Codazzi, Zhengmeng Zhang<br />
| Didier Donsez, Vivien Quéma<br />
| [[Projets-2015-2016-OpenSmartCampus| '''Sheet''']]<br />
| [https://github.com/quentin74/SmartCampus.git '''github''']<br />
| [[Media:RapportProjetOpenSmartCampus2016.pdf|Report]] - [[Media:TransparentsProojetOpenSmartCampus2016.pdf|Slides]] - [[Media:FlyerProjetOpenSmartCampus2016.pdf|Flyer]] - Photos - Videos<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Contribution à SmartClassRoom|Contribution to SmartClassRoom]] (distributed and shared touch interfaces)<br />
| Saussac Thibault, Toussaint Sébastien, Hamdani Youcef, Zoppello Sebastien, Melik sak, Mesnier Vincent<br />
| Jérôme Maisonnasse, Didier Donsez<br />
| [[Projets-2015-2016-SmartClassRoom| '''Sheet''']] - [[Projets-2015-2016-SmartClassRoom/SRS| '''SRS''']]<br />
| [https://github.com/vince0508/SmartClassroom-TiledDisplayPart-master_Main '''github''']<br />
| [[Media:RapportProjetSmartClassRoom.pdf|Report]] - [[Media:TransparentsProjetSmartClassRoom.pdf|Slides]] - [[Media:FlyerProjetSmartClassRoom.pdf|Flyer]] - [https://youtu.be/FEwoA4S9rsM '''Screencast/Video''']<br />
|-<br />
<br />
<br />
|}<br />
<br />
===Cancelled and postponed projects===<br />
* Project with [[Tango Project]] (cancelled)<br />
* Hack the Beam, Didier Donsez & Jérôme Maisonnasse.<br />
* [[Algorithmes de suivi de personnes pour robot de téléprésence RobAIR|Person-tracking algorithms for the RobAIR telepresence robot]] (Jérôme Maisonnasse, Didier Donsez)<br />
<br />
=M2PGI=<br />
==[[Projets M2PGI Services Machine-to-Machine|Machine-to-Machine Services Project]]==<br />
* [[PM2M/2016/TP|Subject and groups]]</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=File:UMLLS.pdf&diff=29020File:UMLLS.pdf2016-04-06T07:51:01Z<p>Tran-Quang-Tan.Bui: Fichier Uml du projet live subtitles</p>
<hr />
<div>UML file of the live subtitles project</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27602LiveSubtitles2016-03-06T23:18:17Z<p>Tran-Quang-Tan.Bui: /* Progress of the project */</p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
=Project presentation=<br />
Transcribe a teacher's speech into subtitles and let students correct misrecognized words<br />
<br />
= Team =<br />
<br />
* Supervisor : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Department : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google Speech API===<br />
Keywords : new paragraph, comma, dot<br />
<br />
Long speech (over 2 minutes) is not supported; recognition has to be restarted after that<br />
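Since a recognition session dies after about two minutes, one workaround is to restart it from its <code>onend</code> handler and accumulate final results across sessions. The sketch below is our own illustration of that idea, not the final app code (the function names are ours); <code>webkitSpeechRecognition</code> is the Chrome implementation of the Web Speech API.<br />

```javascript
// Sketch: survive the ~2 minute session limit by restarting recognition
// whenever it ends, accumulating final results so nothing is lost.
// makeTranscriptAccumulator is pure logic; startRecognition is Chrome-only glue.

function makeTranscriptAccumulator() {
  let finalText = '';
  return {
    // Append only results the engine marked as final; interim results
    // are redelivered later and would otherwise be duplicated.
    addResult(text, isFinal) {
      if (isFinal) finalText += (finalText ? ' ' : '') + text.trim();
    },
    get text() { return finalText; }
  };
}

function startRecognition(acc) {
  const rec = new webkitSpeechRecognition(); // Chrome's Web Speech API object
  rec.continuous = true;
  rec.interimResults = true;
  rec.onresult = (e) => {
    for (let i = e.resultIndex; i < e.results.length; i++) {
      acc.addResult(e.results[i][0].transcript, e.results[i].isFinal);
    }
  };
  rec.onend = () => rec.start(); // the session limit fires onend: restart
  rec.start();
  return rec;
}
```

The accumulator also hides the restarts from the rest of the app: the subtitle view only ever reads <code>acc.text</code>.<br />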
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started on January 11th, 2016.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First meeting with our supervisor Jérôme. We learned more about the project and what is expected over the next weeks.''<br />
<br />
*Getting to grips with the project<br />
*Testing the Google Speech API<br />
*Creating the git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into testing the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There seemed to be multiple APIs: <strike>Google recognition and Web Speech API</strike> it is actually the same API, developed by Google<br />
<br />
== Week 3 (January 25th - January 31st) ==<br />
*The microphone works only when the page is served by a web server; we try Apache (LAMP/XAMPP)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website for editing YouTube subtitles; it might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add a grammar and keywords (like "OK Google") => not possible<br />
<br />
== Week 5 (February 8th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller : this pattern separates the application's concerns; our project is a Web-oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
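As an illustration of the State pattern listed above, microphone handling could look like the sketch below. The class and method names are our own invention for the example, not taken from the project code.<br />

```javascript
// Illustrative State pattern for microphone detection: the recorder
// delegates behaviour to its current state object, so adding a new state
// (e.g. "blocked by the browser") does not require changing Recorder.

class MicOffState {
  label() { return 'off'; }
  toggle(recorder) { recorder.state = new MicOnState(); }
}

class MicOnState {
  label() { return 'on'; }
  toggle(recorder) { recorder.state = new MicOffState(); }
}

class Recorder {
  constructor() { this.state = new MicOffState(); } // microphone not detected yet
  toggle() { this.state.toggle(this); }
  status() { return this.state.label(); }
}
```
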
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API does not work with ambient noise; it works fine when we talk directly into the microphone.<br />
<br />
Tests :<br />
*Fast talking : dead after 1 minute<br />
*Slow talking (with interruptions) with music around : dead after 2 minutes<br />
*Slow talking : dead after 2 minutes<br />
<br />
Meeting with Jérôme to get new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a JavaScript IDE, but too complicated for us to use<br />
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding the collaboration part (a JavaScript database?)<br />
<br />
Working on the presentation<br />
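For the collaboration part, the data sent over socket.io could be small correction messages that every client applies to its local copy of the subtitles. The message shape and event name below are our assumptions for the sketch, not a decided format.<br />

```javascript
// Sketch of a collaborative-correction protocol: a student proposes a
// replacement for a word on one subtitle line; the server broadcasts the
// message and every client applies it locally.

function makeCorrection(lineIndex, oldWord, newWord) {
  return { type: 'correction', lineIndex, oldWord, newWord };
}

function applyCorrection(lines, msg) {
  // Work on a copy so the caller's array is left untouched.
  const copy = lines.slice();
  copy[msg.lineIndex] = copy[msg.lineIndex].replace(msg.oldWord, msg.newWord);
  return copy;
}

// socket.io glue (assumed event name 'correction'):
//   socket.emit('correction', makeCorrection(1, 'recieve', 'receive'));
//   socket.on('correction', msg => { subtitles = applyCorrection(subtitles, msg); });
```

Keeping the messages small (one word per message) also makes it easy for the server to log who corrected what.<br />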
<br />
== Week 8 (March 7th - March 13th) ==<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=File:Live_Subtitles_half_time.jpeg&diff=27597File:Live Subtitles half time.jpeg2016-03-06T23:14:33Z<p>Tran-Quang-Tan.Bui: Achievement at half time project</p>
<hr />
<div>Achievement at half time project</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27596LiveSubtitles2016-03-06T23:13:41Z<p>Tran-Quang-Tan.Bui: /* Preambule */</p>
<hr />
<div>=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
<br />
Not supporting long speech<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27595LiveSubtitles2016-03-06T23:08:14Z<p>Tran-Quang-Tan.Bui: /* Google API Speech */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
<br />
Not supporting long speech<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : separates the application's concerns; a natural fit since our project is a web-oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
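As an illustration of the State pattern applied to microphone detection, a small sketch (class and method names are our own, not taken from the actual codebase):<br />

```javascript
// Sketch of the State pattern for microphone detection.
// Each state object decides the next state for each event, so the
// context never needs if/else chains on a status flag.
class MicContext {
  constructor() { this.state = new Idle(); }
  micDetected() { this.state = this.state.micDetected(); }
  micLost()     { this.state = this.state.micLost(); }
  get name()    { return this.state.constructor.name; }
}

class Idle {
  micDetected() { return new Listening(); } // mic plugged in: start listening
  micLost()     { return this; }            // nothing to do
}

class Listening {
  micDetected() { return this; }
  micLost()     { return new Idle(); }      // mic unplugged: fall back to idle
}
```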
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API does not work well with ambient noise; when we talk directly into the microphone, it works fine.<br />
<br />
Tests :<br />
*Fast talking : dead after 1 minute<br />
*Slow talking (with interruptions), with music around : dead after 2 minutes<br />
*Slow talking : dead after 2 minutes<br />
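A common workaround for the session dying after a minute or two is to restart recognition whenever it ends. A sketch: the recognition object only needs a <code>start()</code> method and an <code>onend</code> hook, so the logic can be exercised with a stub outside the browser:<br />

```javascript
// Restart recognition whenever the browser ends the session,
// unless the user explicitly stopped it. Works on any object with
// start() and an onend hook (e.g. a SpeechRecognition instance).
function keepAlive(rec) {
  let stopped = false;
  rec.onend = () => { if (!stopped) rec.start(); };
  rec.start();
  return () => { stopped = true; }; // call this to stop for good
}
```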
<br />
Meeting with Jérôme to get new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.IO : trying the demo chat and linking Reveal.js with Socket.IO<br />
<br />
WebStorm is a JavaScript IDE, but too complicated for us to use<br />
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from the client to the server with Socket.IO<br />
Working on the collaboration part (a JavaScript database?)<br />
Working on the presentation<br />
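The client-to-server flow reduces to Socket.IO emit/on pairs. The sketch below uses a tiny in-memory stand-in with the same contract so it runs anywhere; the event names are our own, hypothetical ones:<br />

```javascript
// Minimal stand-in for the Socket.IO on/emit contract, used to
// illustrate the subtitle events (event names are hypothetical).
class FakeSocket {
  constructor() { this.handlers = {}; }
  on(event, fn) { this.handlers[event] = fn; }
  emit(event, data) { (this.handlers[event] || (() => {}))(data); }
}

// Server side: rebroadcast each subtitle correction to every student.
function wireServer(socket, broadcast) {
  socket.on('subtitle:edit', (edit) => broadcast('subtitle:update', edit));
}
```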
<br />
=Gallery=</div>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
Not supporting long speech<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27593LiveSubtitles2016-03-06T23:04:14Z<p>Tran-Quang-Tan.Bui: /* Week 3 (January 25th - January 31th) */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27592LiveSubtitles2016-03-06T23:03:50Z<p>Tran-Quang-Tan.Bui: /* Week 3 (January 25th - January 31th) */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27591LiveSubtitles2016-03-06T22:59:55Z<p>Tran-Quang-Tan.Bui: /* Week 6 (February 15th - February 21st) */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27590LiveSubtitles2016-03-06T22:59:48Z<p>Tran-Quang-Tan.Bui: /* Week 6 (February 15th - February 21st) */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27589LiveSubtitles2016-03-06T22:58:54Z<p>Tran-Quang-Tan.Bui: /* Week 7 (February 29th - February 6st) */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27588LiveSubtitles2016-03-06T22:57:25Z<p>Tran-Quang-Tan.Bui: /* Project work */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
<br />
== Week 7 (February 29th - February 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
Working on adding collaboration part (javascript database?)<br />
Working the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27587LiveSubtitles2016-03-06T22:57:17Z<p>Tran-Quang-Tan.Bui: /* Design patterns */</p>
<hr />
<div>=Preambule=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
== Project work ==<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music around : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
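A workaround we can sketch for these timeouts is to restart recognition from its <code>onend</code> handler; this is a hedged sketch, not project code (the backoff helper and its constants are our own assumptions):<br />

```javascript
// Sketch of an auto-restart workaround for the "dead after 1-2 minutes"
// behaviour: Chrome fires onend when the service stops, so we start again.
// The backoff helper and its constants are assumptions, not measured values.
function nextDelay(attempt, baseMs = 250, maxMs = 5000) {
  // Exponential backoff so a rapid restart loop does not hammer the service.
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true;
  let attempt = 0;
  recognition.onresult = () => { attempt = 0; };  // progress made: reset backoff
  recognition.onend = () => {
    // The service ended (timeout, silence, error): schedule a restart.
    setTimeout(() => recognition.start(), nextDelay(attempt++));
  };
  recognition.start();
}
```

With this wiring, a session that dies after two minutes resumes a moment later instead of staying silent.<br />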
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from the client to the server with Socket.IO<br />
Working on the collaboration part (a JavaScript database?)<br />
Working on the presentation<br />
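The client-to-server path mentioned above can be sketched with Socket.IO as follows; the <code>subtitle</code> event name and the message shape are our assumptions, not the project's actual protocol:<br />

```javascript
// Sketch of the client -> server path with Socket.IO.
// The 'subtitle' event name and the payload shape are assumptions.
function makeSubtitleMessage(speaker, text, timestampMs) {
  // Pure helper: the payload sent for every recognized chunk of speech.
  return { speaker, text, at: timestampMs };
}

// Browser side (inert unless the page loaded /socket.io/socket.io.js):
if (typeof io === 'function') {
  const socket = io();  // connect back to the page's origin
  socket.emit('subtitle', makeSubtitleMessage('teacher', 'Hello everyone', Date.now()));
}

// Server side (Node, with the socket.io package), for reference:
//   io.on('connection', (socket) => {
//     socket.on('subtitle', (msg) => io.emit('subtitle', msg)); // broadcast to students
//   });
```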
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=LiveSubtitles&diff=27586LiveSubtitles2016-03-06T22:56:50Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div>=Preamble=<br />
<br />
<br />
=Project presentation=<br />
Transcribe a teacher's speech into subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisor : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Department : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started on January 11th, 2016.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We learned more about our project and what is expected in the coming weeks''<br />
<br />
*Getting to grips with the project<br />
*Testing the Google Speech API<br />
*Setting up the Git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There appeared to be multiple APIs, <strike>Google recognition and the Web Speech API</strike>, but it is the same API, developed by Google<br />
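A minimal sketch of driving this API from JavaScript; only <code>webkitSpeechRecognition</code> and its fields come from the draft specification linked above, while the language choice, the helper, and the logging are illustrative:<br />

```javascript
// Minimal continuous-recognition sketch (runs in Chrome; guarded so it is
// inert outside a browser). Field names follow the W3C Web Speech API draft.
function collectTranscript(results) {
  // Pure helper: concatenate the best alternative of each final result.
  let text = '';
  for (const result of results) {
    if (result.isFinal) text += result[0].transcript;
  }
  return text;
}

if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening between utterances
  recognition.interimResults = true;  // stream partial hypotheses too
  recognition.lang = 'fr-FR';         // illustrative: the lecture language
  recognition.onresult = (event) => {
    console.log(collectTranscript(event.results));
  };
  recognition.start();
}
```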
<br />
== Week 3 (January 25th - January 31st) ==<br />
*The microphone only works when the page is served by a web server; we tried Apache (LAMP/XAMPP)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Tried to add a grammar and keywords (like "OK Google") => not possible<br />
<br />
== Week 5 (February 8th - February 14th) ==<br />
<br />
<br />
== Design patterns ==<br />
<br />
* Model-View-Controller (GoF) : This pattern separates the application's concerns; our project is a Web-oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
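The Singleton idea above ("a teacher is the only one who can launch slides") can be sketched in JavaScript; the class and method names are hypothetical, not taken from our code:<br />

```javascript
// Hypothetical Singleton sketch for the single-presenter rule.
// PresenterSession and canLaunchSlides are illustrative names only.
class PresenterSession {
  constructor(teacherId) {
    if (PresenterSession._instance) {
      // Returning the existing instance enforces "only one presenter".
      return PresenterSession._instance;
    }
    this.teacherId = teacherId;
    PresenterSession._instance = this;
  }
  canLaunchSlides(userId) {
    // Only the teacher who opened the session may drive the slides.
    return userId === this.teacherId;
  }
}

const a = new PresenterSession('teacher-1');
const b = new PresenterSession('student-7'); // ignored: instance already exists
console.log(a === b);                        // -> true
console.log(b.canLaunchSlides('teacher-1')); // -> true
console.log(b.canLaunchSlides('student-7')); // -> false
```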
<br />
<br />
<br />
== Project work ==<br />
<br />
Solving critical problems : the API does not work with ambient noise; it works fine when we talk directly into the microphone.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music around : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from the client to the server with Socket.IO<br />
Working on the collaboration part (a JavaScript database?)<br />
Working on the presentation<br />
<br />
=Gallery=</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=Projets_2015-2016&diff=27579Projets 2015-20162016-03-06T22:10:10Z<p>Tran-Quang-Tan.Bui: /* Projet Semestre S8 */</p>
<hr />
<div><<[[Projets 2014-2015]] | [[Projets]] | [[Projets 2016-2017]]>><br />
=RICM=<br />
==RICM3==<br />
<br />
==RICM4==<br />
===Projet Semestre S8===<br />
<br />
Supervising teachers: Olivier Richard, Didier Donsez<br />
<br />
<br />
* '''Mid-term evaluation on Monday, March 7th''': Format: 10 min (5 min of presentation with 3 slides at most, 5 min of discussion). This evaluation will count toward the final grade.<br />
<br />
'''General guidelines:'''<br />
<br />
* '''Be proactive!!!''': If some points are unspecified or badly specified, decide for yourselves and justify your choices. For technical problems you can: dig into the question, contact the author of the code if applicable, file a bug report ('''Note:''' this takes preparation!), submit a patch, or contact the teacher or the person following the project.<br />
<br />
* '''Maintain a project tracking sheet''': it must be updated every week; it gathers the essential elements of the project, records its evolution, and presents its roadmap. '''Note:''' the sheet's name must be composed of the project name suffixed with ricm4_2015_2016.<br />
<br />
* '''Use version-control software''' for your development, such as [http://en.wikipedia.org/wiki/Git_%28software%29 git ], and we advise using [https://github.com github] to host your public repository.<br />
<br />
* Public documents (e.g. on github) must be written in English (README, documentation, code comments, variable and function names). A bonus will be granted if the report and the slides are in English (the defense will be in French).<br />
<br />
{|class="wikitable alternance"<br />
|+ Affectation des projets RICM4 2015-2016<br />
|-<br />
|<br />
!scope="col"| Sujet<br />
!scope="col"| Etudiants<br />
!scope="col"| Enseignant(s)<br />
!scope="col"| Fiche de suivi<br />
!scope="col"| Dépot git<br />
|-<br />
<br />
!scope="row"| 1<br />
| [[Dashboard pour gestionnaire de tâches et de ressources]]<br />
| CROUZET, MATHIEU<br />
| Richard<br />
| [[Projets-2015-2016-DashBoard| '''Fiche''']]<br />
| [https://github.com/MatthieuCrouzet/Projet4A '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:gl_groupe1.pdf|Rapport Consultant]] - [[Media:Paterns.pdf|Patterns]]<br />
|-<br />
<br />
!scope="row"| 2<br />
| [[Speeding Simplified Script Language]]<br />
| POPEK, BERTRAND-DALECHAMPS, WEI<br />
| Richard<br />
| [[Projets-2015-2016-SSSL| '''Fiche''']] - [[SSSL-UML| '''UML''']]<br />
| [https://github.com/FlorianPO/Speeding-Simplified-Script-Language.git '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:Groupe2_AIR.pdf|Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Borne interactive]] <br />
| DUNAND - NAVARRO - REVEL<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Borne-Interactive| '''Fiche''']] - [[Projets-2015-2016-Borne-Interactive-SRS | '''SRS''']] - [[Projets-2015-2016-Borne-Interactive/UML_Diagrams | '''UML''']]<br />
| [https://github.com/Kant73/InteractiveDisplay '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:IPopo.pdf|Rapport Consultant]] - [[Media:PatternDesign.pdf | '''Design Pattern''']]<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Sonotone]]<br />
| LECORPS, VOUTAT, HATTINGUAIS <br />
| Maisonnasse, Richard<br />
| [[Projets-2015-2016-Sonotone| '''Fiche''']] - [[Projets-2015-2016-Sonotone-SRS | '''SRS''']] <br />
| [https://github.com/Gorgorot38/Sonotone-RICM4 '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:SRS_Consultant_Sonotone_4.pdf|Rapport_Consultant]] - [[Media:pattern_sonotone.pdf|Pattern]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Sous-titre_en_temps_r%C3%A9el_d%27un_cours| Sous-titre d'un cours en temps réel]]<br />
| LECHEVALLIER, BUI, OUNISSI <br />
| Maisonnasse<br />
| [[LiveSubtitles| '''Fiche''']]<br />
| [https://github.com/Lechevallier/RealTimeSubtitles '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media: SRS_Groupe_5.pdf| Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 6<br />
| [[GrenobloisFuté]]<br />
| MOURET, DELAPORTE, LUCIDARME<br />
| Nicolas Palix<br />
| [[Fiche| '''Fiche''']]<br />
| [https://github.com/xxx '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:gl_G14.pdf|Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 7<br />
| [[Streaming en stéréoscopie]]<br />
| ZHAO ZILONG, HAMMOUTI<br />
| Maisonnasse<br />
| [[Projets-2015-2016-Streaming-Stereoscopie| '''Fiche''']] - [[SRS - Streaming en stéréoscopie | '''SRS''']] <br />
| [https://github.com/zhao-zilong/streaming_stereo '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:bruel_medewou_ndiaye.pdf|Rapport_consultant]] <br />
|-<br />
<br />
!scope="row"| 8<br />
| [[PersyCup2016]]<br />
| BIN, ZEGAOUI, ELLAPIN <br />
| Donsez, Maisonnasse<br />
| [[PersyCup| '''Fiche''']]<br />
| [https://github.com/xxx '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]]<br />
|-<br />
<br />
!scope="row"| 9<br />
| [[Services étendus pour le modèle de composants iPOPO pour Python]]<br />
| FOUNAS, HALLAL, GATTAZ <br />
| Calmant & Donsez<br />
| [[Proj-2015-2016-Extensions_IPOPO | '''Fiche''']] - [[Proj-2015-2016-Extensions_IPOPO/SRS | '''SRS''']] - [[Proj-2015-2016-Extensions_IPOPO/UML | '''UML''']] <br />
| [https://github.com/abdelazizFounas/ipopo/tree/tlsremote '''github IPOPO'''] <br /> [https://github.com/gattazr/IPOPO-Remote-Client '''github IPOPO Client''']<br />
| [[Media:9_RapportProjet.pdf|Rapport]] - [[Media:9_TransparentsProojet.pdf|Transparents]] - [[Media:9_FlyerProjet.pdf|Flyer]] - [[Media:3-SRS-Pres.pdf| Rapport Consultant]] - [[Media:9_PatternStrat.pdf|Pattern Design]]<br />
|-<br />
<br />
!scope="row"| 10<br />
| [[IndoorGeoloc2016]]<br />
| ARRADA - CRASTES - FAURE - STOIAN <br />
| Donsez<br />
| [[Proj-2015-2016-IndoorGeoloc/Fiche| '''Fiche''']] - [[Proj-2015-2016-IndoorGeoloc/SRS|SRS]]<br />
| [https://github.com/xxx '''github''']<br />
| [[Media:Proj-2015-2016-IndoorGeoloc/RapportProjet.pdf|Rapport]] - [[Media:Proj-2015-2016-IndoorGeoloc/TransparentsProjet.pdf|Transparents]] - [[Media:Proj-2015-2016-IndoorGeoloc/FlyerProjet.pdf|Flyer]] - [[Media: SRSGroupe17.pdf| Rapport Consultant]]<br />
|-<br />
<br />
!scope="row"| 11<br />
| [[UPnPOpenHAB2016]]<br />
| MEDEWOU, NDIAYE Yacine, BRUEL Anna <br />
| Didier Donsez & Jérôme Maisonnasse<br />
| [[Proj-Openhab-2016| '''Fiche''']] - [[Proj-2015-2016-Int%C3%A9gration_de_cam%C3%A9ra_de_surveillance_UPnP_%C3%A0_Openhab/SRS| '''SRS''']] - [[Proj-Openhab/UML| '''UML''']]<br />
| [https://github.com/openHab-UPnP '''github''']<br />
| [[Media:RapportProjet.pdf|Rapport]] - [[Media:TransparentsProojet.pdf|Transparents]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:gl_ZHAO_HAMMOUTI.pdf|Rapport Consultant]] - [[Media:pattern_ZHAO_HAMMOUTI.pdf|Patterns]] - [[Media:fichier.pdf|Mini soutenance]]<br />
|-<br />
<br />
!scope="row"| 12<br />
| [[Sign2Speech]]<br />
| NIOGRET, NOGUERON, TITH<br />
| Didier Donsez<br />
| [[sign2speech_ricm4_2015_2016| '''Tracking sheet''']] - [[SRS - Sign2Speech | '''SRS''']] - [[UML | '''UML''']]<br />
| [https://github.com/SignToSpeech-Project '''github''']<br />
| [[Media:RapportProjet.pdf|Report]] - [[Media:TransparentsProjet.pdf|Slides]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:12-Sign2Speech-RapportConsultant.pdf|Consultant report]]<br />
|-<br />
<br />
!scope="row"| 13<br />
| [[AstroImage]] <br />
| RACHEX, BLANC, GERRY<br />
| Olivier Richard and Bruno Bzeznik<br />
| [[Proj-2015-2016-Astroimage/Fiche| '''Tracking sheet''']] - [[AstroImage/SRS | '''SRS''']] - [[Media:AstroImage-UML.pdf | '''UML''']]<br />
| [https://github.com/nicolas-blanc/AstroImage '''github''']<br />
| [[Media:RapportProjet.pdf|Report]] - [[Media:TransparentsProjet.pdf|Slides]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:13-AstroImage-RapportConsultant.pdf|Consultant report]] - [https://docs.google.com/presentation/d/15F8DRktwmOuSNabdxMASniyr-TIiRzGNNG1mOhcoSnk/edit?usp=sharing '''Patterns''']<br />
|-<br />
<br />
!scope="row"| 14<br />
| [[Tachymètre]]<br />
| MACE, NOUGUIER, RAMEL<br />
| Olivier Gattaz<br />
| [[Fiche - Tachymètre | '''Tracking sheet''']] - [[SRS - Tachymètre| '''SRS''']] - [[UML - Tachymètre| '''UML''']]<br />
| [https://github.com/Quego/Tachymetre '''github - Tachymètre''']<br />
| [[Media:RapportProjet.pdf|Report]] - [[Media:TransparentsProjet.pdf|Slides]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:srs_tachymetre.pdf|Consultant report]] - [[Media:14_PatternDesign.pdf | Design pattern]]<br />
|-<br />
<br />
!scope="row"| 16<br />
| [[SmartProjector]]<br />
| BRANGER, HABLOT<br />
| Donsez, Maisonnasse<br />
| [[Fiche_SmartProjector_ricm4_2015_2016| '''Tracking sheet''']] - [[SRS - SmartProjector| '''SRS''']] - [[UML - SmartProjector| '''UML''']]<br />
| [https://github.com/P0ppoff/SmartProjector '''github''']<br />
| [[Rapport|Report]] - [[Media:PresentationProjet.pdf|Slides]] - [[Media:FlyerProjet.pdf|Flyer]] - [[Media:Gl_groupe16.pdf|Consultant report]] - [http://air.imag.fr/index.php/Patron_de_conception_-_SmartProjector patterns]<br />
|-<br />
<br />
|}<br />
<br />
===Project list===<br />
<br />
* [[Dashboard pour gestionnaire de tâches et de ressources]], Olivier Richard<br />
* [[Moteur distribué d'exécution de commande]], Olivier Richard<br />
* [[Environnement d'expérimentation de pour NVIDIA Shield (Tegra X1)]], Olivier Richard <br />
* [[Speeding Simplified Script Language]], Olivier Richard<br />
<br />
* Open-source assistive technologies for hearing impairment, with Didier Donsez, Jérôme Maisonnasse, Marie-Paule Balicco (SAH UGA) and Nicolas Vuillerme<br />
** [[Borne interactive]] (1 topic)<br />
** [[Sonotone]] (1 topic)<br />
** [[Sous-titre en temps réel d'un cours]] (1 topic)<br />
* [[GrenobloisFuté]] Traffic layer on OsmAnd via a plugin. Dynamic data from the Grenoble metropolitan transport network. Android development. Nicolas Palix.<br />
* [[GeoDiff]] Production, visualization and merging of changes (diffs) over geocoded information: Nicolas Palix<br />
* [[Smart campus augmenté et contributif]] Didier Donsez, Vivien Quema<br />
<br />
* [[Streaming en stéréoscopie]] over [[WebRTC]] with rendering on an [[Oculus]] headset for the [[RobAIR]] robot, Jérôme Maisonnasse. ([http://gstconf.ubicast.tv/videos/stereoscopic-3d-video/ see]).<br />
* [[STM32F7]]: setting up the compilation toolchain on Linux with [[OpenSTM32]] and [[OpenOCD]]. Nicolas Palix<br />
* [[PersyCup2016]]: Persyval RoboCup, Didier Donsez, Vivien Quema, Jérôme Maisonnasse. (3 students)<br />
* [[Services étendus pour le modèle de composants iPOPO pour Python]], Didier Donsez & Thomas Calmant. (2 students)<br />
* [[SmartClassRoom2016|Development of a shared interface for touch tables (SmartClassRoom project)]], Didier Donsez, Jérôme Maisonnasse. (2 students)<br />
* [[iRock2016|iRock: landslide monitoring]], Didier Donsez & Vivien Quema<br />
* [[IndoorGeoloc2016|Indoor geolocation using BLE and Wi-Fi beacons, based on STM32 boards and iBeacon & AltBeacon beacons]], Didier Donsez & Vivien Quema<br />
* [[UPnPOpenHAB2016|Integration and management of UPnP surveillance cameras in the open-source home-automation platforms OpenHAB and myOpenHAB]], Didier Donsez & Jérôme Maisonnasse.<br />
<br />
'''Lower-priority projects'''<br />
<br />
* [[Liveprogramming with Kivy]], Olivier Richard<br />
* [[AstroImage]] astronomy image production, Olivier Richard and Bruno Bzeznik<br />
* [[G-code Cruncher]] CNC machine control (Nucleo grbl + esp8266 + SD card), Olivier Richard<br />
* [[Intégration OpenHAB / OpenTele]] Nicolas Palix<br />
<br />
==RICM5==<br />
<br />
===Semester S10 project===<br />
<br />
Teacher in charge: Didier Donsez<br />
<br />
Kick-off: Monday 25/01, 10:30-12:30, room P253 (meet in front of the AIR room) - videoconference for Thibaut Cordier<br />
<br />
Defense: Friday 18/03, 8:30-12:30, room P257 <br />
<br />
Students: RICM5 + 8 Avosti DUT RT students<br />
<br />
Reminder of the MPI sessions:<br />
* Session 1: Tuesday 26 January, afternoon - Stéphanie Diligent<br />
* Session 2: Tuesday 2 February, afternoon - Stéphanie Diligent<br />
* Session 3: Monday 8 February, morning - Emmanuelle Tréhoust<br />
* Session 4: Thursday 11 February, morning - Emmanuelle Tréhoust<br />
* Session 5: Monday 21 March, morning - Stéphanie Diligent and Emmanuelle Tréhoust<br />
<br />
Defense schedule (to be announced):<br />
* Bossa<br />
* IaaS Docker<br />
* EDF immersion<br />
* SmartCampus<br />
* SmartClassRoom (in room C005)<br />
* Farewell reception<br />
<br />
<br />
{|class="wikitable alternance"<br />
|+ RICM5 project assignments, 2015-2016<br />
|-<br />
|<br />
!scope="col"| Topic<br />
!scope="col"| Students<br />
!scope="col"| Teacher(s)<br />
!scope="col"| Tracking sheet<br />
!scope="col"| Git repository<br />
!scope="col"| Deliverables<br />
|-<br />
<br />
!scope="row"| 1<br />
| [http://air.imag.fr/index.php/IaaS_collaboratif_avec_Docker IaaS - Docker]<br />
| Eudes Robin, Damotte Alan, Barthelemy Romain, Mammar Malek, Guo Kai, Bonnard Loïc, Caperan Théo<br />
| Didier Donsez<br />
| [[Projets-2015-2016-IaaS_Docker| '''Tracking sheet''']]<br />
| [https://github.com/EudesRobin/iaas-collaboratif '''github''']<br />
| [[Media:Rapport_IaaS.pdf|Report]] - [[Media:Transparents_IaaS.pdf|Slides]] - [[Media:Flyer_IaaS.pdf|Flyer]]<br />
|-<br />
!scope="row"| 2<br />
| [http://air.imag.fr/index.php/Portage_de_Bossa Porting Bossa to the Linux 4.x kernel]<br />
| Eric Michel Fotsing, Ombeline Rossi, Longfei Yao<br />
| Nicolas Palix, Didier Donsez<br />
| [[Projets-2015-2016-Portage_Bossa| '''Tracking sheet''']]<br />
| [https://github.com/ZenithKaizer/ '''github''']<br />
| [[Media:Rapport_Bossa.pdf|Report]] - [[Media:Transparents_Bossa.pdf|Slides]] - [[Media:Flyer_Bossa.pdf|Flyer]]<br />
|-<br />
<br />
!scope="row"| 3<br />
| [[Visite immersive en réalité virtuelle dans une usine avec EDF]]<br />
| Adam Christophe, Aissanou Sarah, Klipffel Tararaina, Qian Jean, Zominy Laurent<br />
| Didier Donsez, Georges-Pierre Bonneau, Thibaut Cordier (EDF)<br />
| [[Projets-2015-2016-VisiteImmersiveEDF| '''Tracking sheet''']]<br />
| [https://github.com/VisiteImmersiveEDF '''github''']<br />
| [[Media:RapportProjetX.pdf|Report]] - [[Media:TransparentsProjetX.pdf|Slides]] - [[Media:FlyerProjetX.pdf|Flyer]]<br />
|-<br />
<br />
!scope="row"| 4<br />
| [[Contribution à OpenSmartCampus]] (see http://data.beta.metropolegrenoble.fr/)<br />
| Quentin Torck, Vivien Michel, Jérémy Hammerer, Rama Codazzi, Zhengmeng Zhang<br />
| Didier Donsez, Vivien Quéma<br />
| [[Projets-2015-2016-OpenSmartCampus| '''Tracking sheet''']]<br />
| [https://github.com/quentin74/SmartCampus.git '''github''']<br />
| [[Media:RapportProjetX.pdf|Report]] - [[Media:TransparentsProjetX.pdf|Slides]] - [[Media:FlyerProjetX.pdf|Flyer]]<br />
|-<br />
<br />
!scope="row"| 5<br />
| [[Contribution à SmartClassRoom]] (distributed and shared touch interfaces)<br />
| Saussac Thibault, Toussaint Sébastien, Hamdani Youcef, Zoppello Sebastien, Melik Sak, Mesnier Vincent<br />
| Jérôme Maisonnasse, Didier Donsez<br />
| [[Projets-2015-2016-SmartClassRoom| '''Tracking sheet''']]<br />
| [https://github.com/XXXX '''github''']<br />
| [[Media:RapportProjetSmartClassRoom.pdf|Report]] - [[Media:TransparentsProjetSmartClassRoom.pdf|Slides]] - [[Media:FlyerProjetSmartClassRoom.pdf|Flyer]]<br />
|-<br />
<br />
<br />
|}<br />
<br />
===Cancelled and postponed projects===<br />
* Project with [[Tango Project]] (cancelled)<br />
* Hack the Beam, Didier Donsez & Jérôme Maisonnasse.<br />
* [[Algorithmes de suivi de personnes pour robot de téléprésence RobAIR]] (Jérôme Maisonnasse, Didier Donsez)<br />
<br />
=M2PGI=<br />
==[[Projets M2PGI Services Machine-to-Machine|Machine-to-Machine Services project]]==<br />
* [[PM2M/2016/TP|Topic and groups]]</div>Tran-Quang-Tan.Buihttps://air.imag.fr/index.php?title=Sous-titre_en_temps_r%C3%A9el_d%27un_cours&diff=27578Sous-titre en temps réel d'un cours2016-03-06T22:08:34Z<p>Tran-Quang-Tan.Bui: </p>
<hr />
<div>==Objective==<br />
*development of a semi-automatic translation application<br />
*a collaborative Web-2.0 UI to make it easy to correct bad translations<br />
==Technology constraints==<br />
*Google Speech Recognition API<br />
*HTML5 + CSS + JavaScript<br />
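The constraints above can be sketched as follows. This is a hypothetical illustration, not the project's implementation: the helper name `splitResults` and the `subtitles` element id are ours, and the wiring assumes Chrome's prefixed `webkitSpeechRecognition` from the Web Speech API.

```javascript
// Split a list of recognition results into finalized subtitle lines and the
// still-changing interim tail. Pure function, so it also runs outside a browser.
function splitResults(results) {
  const finalized = [];
  let interim = "";
  for (const r of results) {
    if (r.isFinal) {
      finalized.push(r.text.trim()); // stable text, safe to correct collaboratively
    } else {
      interim += r.text;             // provisional text, may still change
    }
  }
  return { finalized, interim: interim.trim() };
}

// Browser-only wiring, guarded so this file also loads under Node.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;     // keep listening for a whole lecture
  recognition.interimResults = true; // show words before they are final
  recognition.lang = "fr-FR";        // lecture language (assumption)
  recognition.onresult = (event) => {
    const results = [];
    for (let i = 0; i < event.results.length; i++) {
      results.push({
        text: event.results[i][0].transcript,
        isFinal: event.results[i].isFinal,
      });
    }
    const { finalized, interim } = splitResults(results);
    // "subtitles" is a hypothetical element id in the presenter page.
    document.getElementById("subtitles").textContent =
      (finalized.join(" ") + " " + interim).trim();
  };
  recognition.start();
}
```

A collaborative correction UI could then let viewers edit only the `finalized` lines, since interim text is still being revised by the recognizer.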
==Work plan==<br />
*getting familiar with Google Speech<br />
*getting familiar with reveal.js<br />
*scenario design<br />
==Advice==<br />
*do not let creativity win out over reaching a working prototype</div>Tran-Quang-Tan.Bui