https://air.imag.fr/api.php?action=feedcontributions&user=Maxime.Lechevallier&feedformat=atomair - User contributions [en]2024-03-29T12:48:55ZUser contributionsMediaWiki 1.35.13https://air.imag.fr/index.php?title=LiveSubtitlesSRS&diff=29261LiveSubtitlesSRS2016-04-07T11:13:21Z<p>Maxime.Lechevallier: /* Render the subtitles to slides */</p>
<hr />
<div>= Introduction =<br />
<br />
== Purpose of the requirements document ==<br />
This Software Requirements Specification (SRS) identifies the requirements for the project "RealTimeSubtitles". It is a guideline to the features offered and the problems we will have to solve. The project is open source and hosted on GitHub; the code is organized to allow review by us or by new potential contributors.<br />
<br />
== Scope of the product ==<br />
RealTimeSubtitles is an app designed to help partially deaf students in a classroom. The aim is to transcribe a teacher's speech live and display it on the corresponding slide as subtitles. In addition, students in the classroom can correct the subtitles on a collaborative HMI. We have to use the Google Speech API for the transcription, reveal.js for the slides, and JavaScript.<br />
<br />
<br />
= General Description=<br />
== Product perspective ==<br />
<br />
The main target of our project is to help partially deaf students be more autonomous when attending a lecture. This project is proposed by the department for disabled students at the UGA. In addition, we have to design a collaborative HMI for students to correct the subtitles in real time. <br />
<br />
<br />
== Product functions ==<br />
<br />
The app is divided into 2 parts:<br />
*The transcription by Google Speech<br />
First, the API must recognize the teacher's speech and transcribe it in real time. Final results are appended in the right place according to the current slide.<br />
*The collaborative HMI<br />
Designed for students, it allows logged-in students to follow a course. While the teacher speaks, students can either follow the course and read the subtitles, or edit the subtitles to correct the results.<br />
<br />
== User characteristics ==<br />
<br />
There are three types of users for our app:<br />
*The teacher talking while showing his slides<br />
*The students editing notes<br />
*The students reading the notes and the partially deaf students <br />
<br />
<br />
== Operating environment ==<br />
The Google Speech API works in Google Chrome. A good Internet connection is required for transcription.<br />
<br />
== General constraints ==<br />
*The teacher needs to have his slides in reveal.js<br />
*The teacher needs to talk loudly and not too fast<br />
*The room has to be quiet (no noise)<br />
These conditions reduce errors and help the API transcribe the speech well. However, transcription won't be perfect due to the instability of the Google Speech API.<br />
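Because the Google Speech API only works in Chrome, the app can check for support up front before trying to record. A minimal feature-detection sketch (the function name is ours, not part of the project):

```javascript
// Detect whether the browser exposes the Web Speech API recognition
// interface (prefixed in Chrome, unprefixed in the draft specification).
function speechRecognitionSupported(win) {
  return 'webkitSpeechRecognition' in win || 'SpeechRecognition' in win;
}

// In the browser one would call: speechRecognitionSupported(window)
```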
<br />
<br />
= Specific requirements, covering functional, non-functional and interface requirements =<br />
<br />
== Requirement X.Y.Z (in Structured Natural Language) ==<br />
<br />
=== Speech recognition ===<br />
<br />
'''Description''': Capture the voice and return a textual transcription<br />
<br />
'''Inputs''': Voice of a speaker<br />
<br />
'''Source''': Human<br />
<br />
'''Outputs''': Textual data<br />
<br />
'''Destination''': User<br />
<br />
'''Action''': A speaker talks into a microphone and the system returns the transcript as text<br />
<br />
'''Non functional requirements''': Accurate detection of spoken words<br />
<br />
'''Pre-condition''': User has a microphone<br />
<br />
'''Post-condition''': Words are detected<br />
<br />
'''Side-effects''': Words are not detected, or are detected incorrectly<br />
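This requirement maps directly onto the Web Speech API the project tests in Chrome. A minimal sketch, assuming a hypothetical `appendToSlide` callback for the destination and French as the lecture language:

```javascript
// Sketch of the speech-recognition requirement with the Web Speech API
// (exposed in Chrome as webkitSpeechRecognition).

// Pure helper: keep only results marked final and join their transcripts.
function collectFinalTranscripts(results) {
  return results
    .filter(function (r) { return r.isFinal; })
    .map(function (r) { return r[0].transcript; })
    .join(' ');
}

// Browser wiring: start continuous recognition and forward final text.
function startRecognition(appendToSlide) {
  var recognition = new webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening across pauses
  recognition.interimResults = true;  // also receive partial hypotheses
  recognition.lang = 'fr-FR';         // assumption: lectures in French

  recognition.onresult = function (event) {
    var newResults = Array.prototype.slice.call(event.results, event.resultIndex);
    var finals = collectFinalTranscripts(newResults);
    if (finals) appendToSlide(finals);
  };
  recognition.start();
  return recognition;
}
```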
<br />
<br />
<br />
=== Render the subtitles to slides ===<br />
<br />
'''Description''': Show the subtitles on the slides<br />
<br />
'''Inputs''': words spoken<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': slides with subtitles<br />
<br />
'''Destination''': slides<br />
<br />
'''Action''': Get the spoken words and display them correctly on the slides<br />
<br />
'''Non functional requirements''': No loss of data<br />
<br />
'''Pre-condition''': Spoken words are detected<br />
<br />
'''Post-condition''': Slides are shown with subtitles<br />
<br />
'''Side-effects''': Subtitles are badly positioned and hide the slides, or are unreadable.<br />
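A sketch of how finalized words could be attached to the slide that is current when they arrive. The per-slide `subtitles-N` container ids and function names are assumptions; `Reveal.getIndices()` is a real reveal.js call:

```javascript
// Append finalized transcript text to the subtitle container of a slide
// (one container per slide index, e.g. <div id="subtitles-2">).
function subtitleContainerId(slideIndex) {
  return 'subtitles-' + slideIndex;
}

function appendSubtitle(doc, slideIndex, text) {
  var el = doc.getElementById(subtitleContainerId(slideIndex));
  if (!el) return false;  // no subtitle area for this slide yet
  el.textContent += (el.textContent ? ' ' : '') + text;
  return true;
}

// Browser wiring sketch (assumes reveal.js is loaded):
//   appendSubtitle(document, Reveal.getIndices().h, finalTranscript);
```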
<br />
=== Editing subtitles ===<br />
'''Description''': Users can edit the subtitles: add or edit words<br />
<br />
'''Inputs''': Wrongly detected word<br />
<br />
'''Source''': Speech recognizer<br />
<br />
'''Outputs''': corrected word<br />
<br />
'''Destination''': shown subtitles<br />
<br />
'''Action''': The user clicks on the word they want to edit, then edits it with the keyboard. The user clicks on the blank space between words to add a word.<br />
<br />
'''Non functional requirements''': Easy to click between words, or add a word<br />
<br />
'''Pre-condition''': Words are detected<br />
<br />
'''Post-condition''': words are added or modified<br />
<br />
'''Side-effects''': A correct word is removed, or the text is not displayed well.<br />
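The two click-to-edit actions above reduce to two list operations on a subtitle's word list. An illustrative sketch (function names are ours, not the project's):

```javascript
// Insert a word before position `index` (clicking the blank space
// between words), returning a new array without mutating the input.
function insertWordAt(words, index, word) {
  var copy = words.slice();
  copy.splice(index, 0, word);
  return copy;
}

// Replace the word at position `index` (clicking a wrong word and
// retyping it), returning a new array without mutating the input.
function replaceWordAt(words, index, word) {
  var copy = words.slice();
  copy[index] = word;
  return copy;
}
```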
<br />
<br />
<br />
=== Session login ===<br />
'''Description''': User has his own session<br />
<br />
'''Inputs''': User profile<br />
<br />
'''Source''': User profile<br />
<br />
'''Outputs''': A logged user<br />
<br />
'''Destination''': security manager, session control<br />
<br />
'''Action''': The user clicks on the login form and enters their login and password.<br />
<br />
'''Non functional requirements''': secured against SQL injection<br />
<br />
'''Pre-condition''': The user wants to log in and knows their login and password<br />
<br />
'''Post-condition''': The user is logged in<br />
<br />
'''Side-effects''': Users are tracked by id. Users can't delete other users' courses<br />
<br />
= Product Evolution =<br />
<br />
*A more efficient speech API<br />
*Using RealTimeSubtitles in meetings/conferences<br />
<br />
<br />
= References =<br />
<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#speechreco-result<br />
*https://developers.google.com/web/updates/2013/01/Voice-Driven-Web-Apps-Introduction-to-the-Web-Speech-API<br />
*https://openclassrooms.com/courses/des-applications-ultra-rapides-avec-node-js/socket-io-passez-au-temps-reel<br />
*https://openclassrooms.com/courses/concevez-votre-site-web-avec-php-et-mysql/tp-page-protegee-par-mot-de-passe<br />
*https://openclassrooms.com/courses/concevez-votre-site-web-avec-php-et-mysql/variables-superglobales-sessions-et-cookies<br />
*https://www.youtube.com/watch?v=o0xr1JRZOb4&index=2&list=PLLnpHn493BHFWQGA1PcyQZWAfR96a4CkH<br />
*https://atmospherejs.com<br />
*https://www.meteor.com/tutorials/blaze/adding-user-accounts<br />
*http://meteortips.com/<br />
*https://github.com/CollectionFS/Meteor-CollectionFS#installation<br />
*http://getbootstrap.com/getting-started/<br />
*http://srault95.github.io/meteor-app-base/meteor-collection-helpers/</div>
https://air.imag.fr/index.php?title=LiveSubtitles&diff=29011LiveSubtitles2016-04-06T07:15:02Z<p>Maxime.Lechevallier: /* Week 12 (April 4st - April 6st) */</p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
=Project presentation=<br />
Transcribe a teacher's speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Department : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words: new paragraph, comma, dot<br />
<br />
Does not support long speech (over 2 minutes); recognition has to be restarted after that<br />
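The ~2-minute cutoff noted above can be worked around by restarting recognition whenever the engine stops on its own. A sketch under that assumption; the controller shape is ours, not the project's actual code:

```javascript
// Workaround sketch: if recognition ends while we still want it active
// (the engine's cutoff), restart it; if we stopped it ourselves, don't.
function makeAutoRestart(recognition) {
  var state = { active: false, restarts: 0 };

  recognition.onend = function () {
    if (state.active) {       // ended by the engine, not by us: restart
      state.restarts += 1;
      recognition.start();
    }
  };

  state.start = function () { state.active = true; recognition.start(); };
  state.stop = function () { state.active = false; recognition.stop(); };
  return state;
}

// Browser usage sketch:
//   var ctl = makeAutoRestart(new webkitSpeechRecognition());
//   ctl.start();
```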
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2016.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*Testing the Google Speech API<br />
*Setting up the Git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiple APIs: <strike>Google recognition and Web Speech API</strike> It is the same API, developed by Google<br />
<br />
== Week 3 (January 25th - January 31st) ==<br />
*The microphone works only when served by a local web server; we tried with Apache (LAMP/XAMPP)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website for editing YouTube subtitles; it might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate the application's concerns. Our project is a Web-oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems: the API does not work with ambient noise. When talking directly into the microphone, the API works fine.<br />
<br />
Tests:<br />
*Fast talking: dead after 1 minute<br />
*Slow talking (with interruptions) with music around: dead after 2 minutes<br />
*Slow talking: dead after 2 minutes<br />
<br />
Meeting with Jérôme to get new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a JavaScript IDE but too complicated for us to use<br />
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding the collaboration part (JavaScript database?)<br />
<br />
Working on the presentation<br />
<br />
== Week 8 (March 7th - March 13th) ==<br />
<br />
Trying to implement sessions in PHP<br />
<br />
Searching for an easy way to store our data (which structure and which technology)<br />
<br />
Beginning to implement our project according to the Model-View-Controller pattern<br />
<br />
== Week 9 (March 14th - March 20th) ==<br />
<br />
Decision to switch to a Meteor project<br />
<br />
Learning the Meteor framework with tutorials (PDF and YouTube)<br />
<br />
== Week 10 (March 21st - March 27th) ==<br />
<br />
Beginning the implementation of our project under the Meteor framework<br />
<br />
For more security, decision to implement all functions that modify the database on the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
<br />
== Week 11 (March 28th - April 3rd) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
Implementation of the Reveal package<br />
<br />
==== Features added on the client side: ====<br />
*UI for adding a word, or an option to a word, in the note area via mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data at a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment a course's listener count<br />
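The server-side features above operate on the three MongoDB collections listed for this week. A plain-object sketch of plausible document shapes; all field names are assumptions, the real schema is in the GitHub repo:

```javascript
// Hypothetical document builders for the Courses, Slides and Words
// collections; in the real app these would be inserted via server methods.
function makeCourse(name, teacherId) {
  return { name: name, teacherId: teacherId, listeners: 0 };
}
function makeSlide(courseId, index) {
  return { courseId: courseId, index: index };
}
function makeWord(slideId, position, text) {
  // `options` holds the alternative words students propose for this one.
  return { slideId: slideId, position: position, text: text, options: [] };
}

// "Increment a course's listener count" as a Mongo-style update modifier:
function listenerIncrementModifier() {
  return { $inc: { listeners: 1 } };
}
```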
<br />
== Week 12 (April 4th - April 6th) ==<br />
<br />
*Establishment of a new project tree<br />
<br />
*Learning and developing a router to navigate between pages<br />
<br />
*Learning and using Bootstrap 3<br />
<br />
*Adding the Google Speech API<br />
<br />
*Adding notes beside the Reveal slides in two modes: Edit and Read<br />
<br />
*Establishment of the collaborative-part algorithm<br />
<br />
*Establishment of usage restrictions depending on whether the user is a teacher or a student<br />
<br />
*Retail, konami code, fun and joy<br />
<br />
=Gallery=</div>Maxime.Lechevallierhttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29010LiveSubtitles2016-04-06T07:14:28Z<p>Maxime.Lechevallier: /* Progress of the project */</p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
<br />
Not supporting long speech (over 2 minutes), have to reboot after that<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding collaboration part (javascript database?)<br />
<br />
Working the presentation<br />
<br />
== Week 8 (March 7st - March 13st) ==<br />
<br />
Try to implemente Session in php<br />
<br />
Searching for an easy way to store our data (which structure and which technologie)<br />
<br />
Begin to implement our projet according to the model view controller Model<br />
<br />
== Week 9 (March 14st - March 20st) ==<br />
<br />
Decision to switch to a Meteor projet<br />
<br />
Learning of the Meteor framework with tutoriel on pdf and youtube<br />
<br />
== Week 10 (March 21st - March 27st) ==<br />
<br />
Beginning of the implementation of our projet under the framework Meteor<br />
<br />
For more security, decision to implemente all functions that modify the database in the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
<br />
== Week 11 (April 28st - April 3st) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
Implementation of the Reveal package<br />
<br />
==== Features added on the client side: ====<br />
*UI of adding a word or an option to a word in the note part thanks to mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data on a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment number of course's listener<br />
<br />
== Week 12 (April 4st - April 6st) ==<br />
<br />
Establishment of a new tree<br />
<br />
Learning and development router to navigate between pages<br />
<br />
Learning and use of Bootstrap 3<br />
<br />
Adding API Google Speech<br />
<br />
Adding note beside Reveal slides in two mode: Edit and Read<br />
<br />
Establishment of the collaborative part algorithm<br />
<br />
Establishment of use restriction depending on whether the user is teacher or student<br />
<br />
Retail, konami code, fun and joy<br />
<br />
=Gallery=</div>Maxime.Lechevallierhttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29009LiveSubtitles2016-04-06T06:56:59Z<p>Maxime.Lechevallier: /* Progress of the project */</p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
=Project presentation=<br />
Transcribe a teacher speech to subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisors : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Departement : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : new paragraph, comma, dot<br />
<br />
Not supporting long speech (over 2 minutes), have to reboot after that<br />
<br />
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2015.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We've learned more about our project and what is expected for the next weeks''<br />
<br />
*Handling the project<br />
*testing Google API Speech<br />
*Making git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiples API :<strike> Google recognition and Web Speech API</strike> It is the same API developped by Google<br />
<br />
== Week 3 (January 25th - January 31th) ==<br />
*Microphone works only when a virtual server is installed, we try with apache (Lamp/Xamp)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website to edit youtube subtitles, might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 08th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern is used to separate application's concerns. Our project is Web oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
=== Project work ===<br />
<br />
Solving critical problems : the API is not working with ambient noise. When we are talking directly to the microphone the API is working fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music arround : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to have new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.io, trying the demo chat, linking Reveal.js with socket.io<br />
<br />
WebStorm is a Javascript IDE but too complicated too use for us<br />
<br />
== Week 7 (February 29th - March 6st) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding collaboration part (javascript database?)<br />
<br />
Working the presentation<br />
<br />
== Week 8 (March 7st - March 13st) ==<br />
<br />
Try to implemente Session in php<br />
<br />
Searching for an easy way to store our data (which structure and which technologie)<br />
<br />
Begin to implement our projet according to the model view controller Model<br />
<br />
== Week 9 (March 14st - March 20st) ==<br />
<br />
Decision to switch to a Meteor projet<br />
<br />
Learning of the Meteor framework with tutoriel on pdf and youtube<br />
<br />
== Week 10 (March 21st - March 27st) ==<br />
<br />
Beginning of the implementation of our projet under the framework Meteor<br />
<br />
For more security, decision to implemente all functions that modify the database in the server side<br />
<br />
==== Features added on the client side: ====<br />
*Add/remove a course<br />
*Login<br />
==== Features added on the server side: ====<br />
*Insert course data<br />
*Remove course data<br />
<br />
== Week 11 (April 28st - April 3st) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
==== Features added on the client side: ====<br />
*UI of adding a word or an option to a word in the note part thanks to mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data on a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment the number of a course's listeners<br />
<br />
=Gallery=</div>Maxime.Lechevallierhttps://air.imag.fr/index.php?title=LiveSubtitles&diff=29008LiveSubtitles2016-04-06T06:56:28Z<p>Maxime.Lechevallier: /* Progress of the project */</p>
<hr />
<div>[[File:Live_Subtitles_half_time.jpeg|800px|thumb|right|Half time project achievement]] <br />
=Project presentation=<br />
Transcribe a teacher's speech into subtitles and allow students to correct misinterpreted words<br />
<br />
= Team =<br />
<br />
* Supervisor : Jérôme Maisonnasse<br />
<br />
* Members : BUI David / LECHEVALLIER Maxime / OUNISSI Sara<br />
<br />
* Department : [http://www.polytech-grenoble.fr/ricm.html RICM 4], [[Polytech Grenoble]]<br />
<br />
<br />
=Specifications=<br />
Make an app usable in any browser (mainly Google Chrome)<br />
<br />
<br />
===Google API Speech ===<br />
Key words : "new paragraph", "comma", "dot"<br />
<br />
Long speech (over 2 minutes) is not supported; recognition has to be restarted after that<br />
<br />
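The key words above have to be turned into punctuation by the app itself, since the recognition API returns plain text. A minimal sketch of such a post-processing step (the keyword list and function name are illustrative, not the project's actual code):<br />

```javascript
// Hypothetical post-processing of a raw transcript: the speech API
// returns plain text, so spoken keywords are mapped to punctuation here.
// The keyword list is an assumption, not the project's actual one.
const KEYWORDS = [
  [/\s*\bnew paragraph\b\s*/gi, '\n\n'],
  [/\s*\bcomma\b/gi, ','],
  [/\s*\bdot\b/gi, '.'],
];

function punctuate(rawTranscript) {
  // apply each keyword replacement in turn
  return KEYWORDS.reduce(
    (text, [pattern, replacement]) => text.replace(pattern, replacement),
    rawTranscript
  );
}

console.log(punctuate('hello comma world dot')); // → "hello, world."
```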
=Links=<br />
<br />
<br />
[https://github.com/Lechevallier/RealTimeSubtitles GitHub]<br />
<br />
<br />
'''Documents'''<br />
API specs :<br />
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
<br />
= Progress of the project =<br />
<br />
The project started January 11th, 2016.<br />
<br />
== Week 1 (January 11th - January 17th) ==<br />
''First interview with our supervisor Jérôme. We learned more about our project and what is expected in the coming weeks.''<br />
<br />
*Getting to grips with the project<br />
*Testing Google API Speech<br />
*Setting up the Git repository<br />
<br />
== Week 2 (January 18th - January 24th) ==<br />
<br />
*Going further into the tests of the API<br />
*https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html<br />
*There are multiple APIs: <strike>Google recognition and Web Speech API</strike> They turn out to be the same API, developed by Google<br />
<br />
== Week 3 (January 25th - January 31st) ==<br />
*The microphone only works when the page is served from a local web server; we try Apache (LAMP/XAMPP)<br />
*Learning JavaScript<br />
*Learning HTML/CSS<br />
*Trying Bootstrap<br />
*Amara.org is a website for editing YouTube subtitles; it might help<br />
<br />
== Week 4 (February 1st - February 7th) ==<br />
*Scrum<br />
*Trello<br />
*Trying to add grammar and key-words (like "OK Google") => Not possible<br />
<br />
== Week 5 (February 8th - February 14th) ==<br />
<br />
<br />
=== Design patterns ===<br />
<br />
* Model-View-Controller (GoF) : This pattern separates the application's concerns. Our project is a Web-oriented program<br />
* Singleton (GoF) : Ensure a class has only one instance, and provide a global point of access to it. <br />
Example : a teacher is the only one who can launch slides<br />
* Visitor (GoF) : Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. <br />
Example : students can edit the subtitles<br />
* State (GoF) : Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. <br />
Example : Microphone detection<br />
* Service Contract - Concurrent Contracts (SOA) : http://soapatterns.org/design_patterns/concurrent_contracts<br />
<br />
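A minimal JavaScript sketch of the Singleton idea above, applied to the teacher example (class and method names are our own, not the project's code):<br />

```javascript
// Minimal Singleton sketch for the "only one teacher controls the slides"
// idea above. Names (TeacherSession, getInstance) are illustrative.
class TeacherSession {
  constructor(name) {
    this.name = name;       // who is driving the slides
    this.currentSlide = 0;  // single shared slide position
  }
  static getInstance(name) {
    if (!TeacherSession.instance) {
      TeacherSession.instance = new TeacherSession(name);
    }
    return TeacherSession.instance; // later calls return the same object
  }
  nextSlide() { this.currentSlide += 1; }
}

const a = TeacherSession.getInstance('Jérôme');
const b = TeacherSession.getInstance('someone else');
console.log(a === b); // → true: there is only ever one session
```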
=== Project work ===<br />
<br />
Solving critical problems : the API does not work with ambient noise; when we talk directly into the microphone, it works fine.<br />
<br />
Tests :<br />
*Fast talking : Dead after 1 minute<br />
*Slow talking (with interruptions) with music around : Dead after 2 minutes<br />
*Slow talking : Dead after 2 minutes<br />
<br />
Meeting with Jérôme to get new directions after a quick demo of the app.<br />
<br />
== Week 6 (February 15th - February 21st) ==<br />
<br />
Studying Socket.IO, trying the demo chat, linking Reveal.js with Socket.IO<br />
<br />
WebStorm is a JavaScript IDE, but too complicated for us to use<br />
<br />
== Week 7 (February 29th - March 6th) ==<br />
<br />
Transmitting data from client to server with socket.io<br />
<br />
Working on adding the collaboration part (a JavaScript database?)<br />
<br />
Working on the presentation<br />
<br />
== Week 8 (March 7th - March 13th) ==<br />
<br />
Trying to implement sessions in PHP<br />
<br />
Searching for an easy way to store our data (which structure and which technology)<br />
<br />
Beginning to implement our project according to the Model-View-Controller pattern<br />
<br />
== Week 9 (March 14th - March 20th) ==<br />
<br />
Decision to switch to a Meteor project<br />
<br />
Learning the Meteor framework from PDF and YouTube tutorials<br />
<br />
== Week 10 (March 21st - March 27th) ==<br />
<br />
Beginning the implementation of our project with the Meteor framework<br />
<br />
For more security, decision to implement all functions that modify the database on the server side<br />
<br />
Features added on the client side:<br />
*Add/remove a course<br />
*Login<br />
Features added on the server side:<br />
*Insert course data<br />
*Remove course data<br />
<br />
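The server-side-only rule above can be sketched, independently of Meteor, as a registry of named methods that validate input before touching the database (Meteor provides this pattern via Meteor.methods/Meteor.call; the method and field names here are our assumptions):<br />

```javascript
// Framework-agnostic sketch of the rule above: clients never touch the
// database directly, they call named server methods that validate first.
// Method names and document fields are assumptions, not the real schema.
const db = { courses: [] };

const methods = {
  'courses.insert'(title) {
    if (typeof title !== 'string' || title.length === 0) {
      throw new Error('invalid course title'); // validation stays server-side
    }
    const course = { id: db.courses.length + 1, title, listeners: 0 };
    db.courses.push(course);
    return course.id;
  },
  'courses.remove'(id) {
    db.courses = db.courses.filter((c) => c.id !== id);
  },
};

// what a client "call" boils down to on the server
function call(name, ...args) {
  return methods[name](...args);
}

const id = call('courses.insert', 'Signal processing');
call('courses.remove', id);
console.log(db.courses.length); // → 0
```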
== Week 11 (March 28th - April 3rd) ==<br />
<br />
Establishment of the final data structure which is composed of several MongoDB collections:<br />
*Courses Collection<br />
*Slides Collection<br />
*Words Collection<br />
<br />
==== Features added on the client side: ====<br />
*UI for adding a word, or an option to a word, in the notes area via mouse events<br />
==== Features added on the server side: ====<br />
*Insert slide data<br />
*Insert word data on a specific position in the note<br />
*Add a word option to a specific word<br />
*Increment the number of a course's listeners<br />
<br />
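A plain-JavaScript sketch of the word operations listed above, using in-memory objects in place of the MongoDB collections (the field names `position`, `options`, etc. are our assumptions about the real schema):<br />

```javascript
// Plain-object sketch of the server operations listed above. Field names
// are assumptions about the MongoDB documents, not the project's schema.
const words = []; // "Words collection" for one slide's note

function insertWordAt(position, text) {
  // shift everything at or after the insertion point one slot right
  words.forEach((w) => { if (w.position >= position) w.position += 1; });
  words.push({ position, text, options: [] });
}

function addWordOption(position, option) {
  const word = words.find((w) => w.position === position);
  if (word) word.options.push(option); // alternative suggested by a student
}

insertWordAt(0, 'fourier');
insertWordAt(0, 'the');      // "the" now precedes "fourier"
addWordOption(1, 'Fourier'); // a student proposes a capitalized correction

const note = words
  .sort((a, b) => a.position - b.position)
  .map((w) => w.text)
  .join(' ');
console.log(note); // → "the fourier"
```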
=Gallery=</div>Maxime.Lechevallierhttps://air.imag.fr/index.php?title=Fiche&diff=25710Fiche2016-01-18T10:50:07Z<p>Maxime.Lechevallier: Created page with "== Exigences fonctionnelles == Reconnaissance vocale et retranscription sous forme de note Correction textuelle en temps réél par un correcteur Interaction avec les diapos..."</p>
<hr />
<div>== Functional requirements ==<br />
Speech recognition and transcription in the form of notes<br />
<br />
Real-time text correction by a corrector<br />
<br />
Interaction with the reveal.js slides</div>Maxime.Lechevallier