Projets-2015-2016-Borne-Interactive

Subject: Borne interactive - Interactive Terminal

Supervisors:

  • Jérôme Maisonasse

Students:

  • Quentin DUNAND (RICM4)
  • Elsa NAVARRO (RICM4)
  • Antoine REVEL (RICM4)


Subject summary

Interactive terminal to welcome deaf or hearing-impaired people: it transcribes speech into text, allowing deaf people to better understand the reception staff.

Week 1 (January 11th - January 17th)

Objectives

  • Discover the project.
  • Contact Jérôme Maisonasse for more details.

Work done

  • Project chosen.

Problems faced

  • None so far.

Week 2 (January 18th - January 24th)

Objectives

  • Set up the logbook.
  • Refine the project ideas.
  • List all the Web Speech API features.

Work done

  • Project requirements (SRS).
  • Brainstorming about the project's goal.

Problems faced

  • Defining the software's features: does it have to provide a way for the deaf person to answer? => No.
  • Hardware choice (touchscreen or keyboard & mouse).

Week 3 (January 25th - January 31st)

Objectives

  • Build a first draft using the Web Speech API.

Work done

Problems faced

  • Getting to know the Web Speech API (see the sketch below).
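
As a reference for this first draft, here is a minimal sketch of the recognition loop the Web Speech API provides, assuming Chrome's prefixed constructor of the time and a hypothetical "transcript" element id; it is an illustration, not the project's actual code.

    // Grab the recognizer constructor (prefixed in Chrome at the time).
    const Recognition =
      (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
    const recognition = new Recognition();

    recognition.lang = "fr-FR";         // the terminal targets French visitors
    recognition.interimResults = true;  // display words while the user speaks
    recognition.continuous = true;      // keep listening across pauses

    recognition.onresult = (event: any) => {
      let text = "";
      for (const result of event.results) {
        text += result[0].transcript;   // best alternative of each segment
      }
      document.getElementById("transcript")!.textContent = text;
    };

    recognition.start();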

Week 4 (February 1st - February 7th)

Objectives

  • Make an appointment with project tutors to show the first prototypes.

Work done

First prototypes:

  • Communication between two windows: simple recognition with start/stop and punctuation buttons.
  • Use of Web Speech API features.

Problems faced

  • How to communicate between two windows (one possible approach is sketched below).
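
The mechanism finally used is not recorded in this entry; a hedged sketch of one standard approach, window.open plus postMessage, with a hypothetical "viewer.html" page for the visitor-facing window:

    // Operator window: open the visitor-facing window once.
    const viewer = window.open("viewer.html", "viewer");

    function sendTranscript(text: string): void {
      // Both pages share the same origin here; a real deployment should
      // pass an explicit target origin instead of "*".
      viewer?.postMessage({ type: "transcript", text }, "*");
    }

    // In viewer.html: display whatever the operator window sends.
    window.addEventListener("message", (event: MessageEvent) => {
      if (event.data && event.data.type === "transcript") {
        document.body.textContent = event.data.text;
      }
    });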

Week 5 (February 8th - February 14th)

Objectives

  • Write appointment debrief.
  • Note down all the mentioned functionalities.

Work done

  • Appointment with project tutors to talk about the UI and the functionalities the application should offer. First demo.
  • Design patterns.
  • Assistance provided to the group working on the project "Sous-titre en temps réel d'un cours" (real-time subtitling of a lecture).

Problems faced

  • Long-term listening with the API (see the sketch below).
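
A sketch of the usual workaround, assuming the `recognition` object from the week 3 sketch (the `stopped` flag is a hypothetical name): the recognizer ends its session on its own after some silence, so it has to be restarted from the onend handler until the operator explicitly stops it.

    let stopped = false;

    recognition.onend = () => {
      if (!stopped) {
        recognition.start();  // resume listening automatically
      }
    };

    function stopListening(): void {
      stopped = true;         // prevent the automatic restart
      recognition.stop();
    }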

Week 6 (February 15th - February 21st)

Objectives

  • Finalize the sprint backlog (product backlog items, tasks, states => slide 50 of the Scrum course) => sprint backlog meeting.

Work done

Problems faced

  • Set the correct priority for every task.

Week 7 (February 29th - March 6th)

Objectives

  • Begin to develop the next features.
  • Identify which tasks are limited by the API.

Work done

  • Preparation for the presentation.
  • Choice of the next tasks to complete.
  • Started developing the modification of a word.

Problems faced

These features seem hard to implement due to the API's limits:

  • Autocompletion,
  • Modification list,
  • Vocal modification of a word,
  • Addition of specific words.

Week 8 (March 7th - March 13th)

Objectives

  • Make an appointment with the project tutor.

Work done

  • Presentation with Didier Donsez and Olivier Richard.
  • Development of keyboard-based modification of a word (see the sketch after this list).
  • Reflection on the administrator interface.
  • Thoughts about the final UI.
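
A hedged sketch of the keyboard modification idea (function and variable names are hypothetical, not the project's code): each recognized word is rendered as a clickable span that turns into a text input, and Enter commits the correction.

    function renderEditableSentence(sentence: string, container: HTMLElement): void {
      container.innerHTML = "";
      for (const word of sentence.split(" ")) {
        const span = document.createElement("span");
        span.textContent = word + " ";
        span.onclick = () => {
          const input = document.createElement("input");
          input.value = word;
          input.onkeydown = (e: KeyboardEvent) => {
            if (e.key === "Enter") {
              span.textContent = input.value + " ";  // commit the edit
              input.replaceWith(span);
            }
          };
          span.replaceWith(input);
          input.focus();
        };
        container.appendChild(span);
      }
    }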

Problems faced

  • Using a database to add words to be recognized is not compatible with the API: specific words can't be added to its comprehension, so the admin interface is no longer interesting, and autocompleting a word is not possible the same way either, because it would need such a database.
  • The API only returns whole alternative sentences at the moment you speak. For the example below, the API returns the first sentence and the second one is accessible as an alternative, but extra processing is needed to detect that "William" corresponds to "I am" and "speak in" to "speaking". This makes inserting a list of other likely words for each word quite complicated (see the sketch after this list).
    "I am speaking to a computer"
    "William speak in to a computer"
  • Vocal modification of a word: of little importance, because the semantic distance involved is much bigger. This task
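
To make the alternatives problem concrete, a sketch of what the API exposes (maxAlternatives and per-result confidence are standard Web Speech API fields; the confidence values in the comments are made up):

    recognition.maxAlternatives = 3;

    recognition.onresult = (event: any) => {
      const result = event.results[event.results.length - 1];
      for (let i = 0; i < result.length; i++) {
        // Whole-sentence alternatives only, e.g.:
        //   "I am speaking to a computer"     (confidence 0.9)
        //   "William speak in to a computer"  (confidence 0.4)
        console.log(result[i].transcript, result[i].confidence);
      }
      // A per-word suggestion list would require aligning these sentences
      // word by word (e.g. an edit-distance alignment), which is the
      // treatment described above.
    };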

Week 9 (March 14th - March 20th)

Objectives

  • Implement the printing option, plus other tasks ... (to complete)

Work done

  • Made an appointment with Jérôme Maisonasse.
  • Completed the SRS file.

Problems faced

Week 10 (March 21st - March 27th)

Objectives

Work done

  • Printing option developed (see the sketch below).
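
A minimal sketch of one way such a printing option can work (the "transcript" id and the page structure are hypothetical): the transcript is copied into a bare page in a new window, then the browser's print dialog is opened.

    function printTranscript(): void {
      const text = document.getElementById("transcript")?.textContent ?? "";
      const printWindow = window.open("", "_blank");
      if (printWindow) {
        // A bare page containing only the transcript.
        printWindow.document.write("<pre>" + text + "</pre>");
        printWindow.document.close();
        printWindow.print();  // open the browser's print dialog
      }
    }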


Problems faced

Week 11 (March 28th - April 3rd)

Objectives

Work done

Problems faced