LiveSubtitlesSRS
Introduction
Purpose of the requirements document
This Software Requirements Specification (SRS) identifies the requirements for the project "Sign2Speech". This is an open source project, and we present what we have done so far in order to attract the interest of potential new contributors. This document is a guideline to the functionalities offered and the problems that the system solves.
Scope of the product
RealTimeSubtitles is an app designed to help partially deaf students in a classroom. The aim is to transcribe the teacher's speech live and display it as subtitles on the corresponding slide. At the same time, students in the classroom can correct the subtitles on a collaborative HMI. We have to use the Google Speech API for the transcription, reveal.js for the slides, and JavaScript.
General Description
Product perspective
The main target of our project is to help partially deaf students be more autonomous when attending a lecture. This project is proposed by the department for disabled students at the UGA. In addition, we have to design a collaborative HMI for students to correct the subtitles in real time.
Product functions
The app is divided into two parts:
- The transcript by GoogleSpeech
First, the API must recognize the teacher's speech and transcribe it in real time. Final results are appended in the right place according to the current slide (see the sketch after this list).
- The collaborative HMI
Designed for students, it allows a logged-in student to follow a course. While the teacher is speaking, students can either follow the course and read the subtitles, or edit the subtitles to correct the results.
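A minimal sketch of the per-slide bookkeeping, assuming the slide position returned by reveal.js (Reveal.getIndices()) is used to decide where a final result belongs; the names subtitlesBySlide, currentSlideKey and appendSubtitle are hypothetical and only illustrate the idea:
 // Hypothetical per-slide store: the transcript lands on the slide that was
 // active when it was spoken. Names below are illustrative only.
 var subtitlesBySlide = {};                       // e.g. { "2-0": ["first sentence", ...] }
 function currentSlideKey() {
   var indices = Reveal.getIndices();             // reveal.js: current {h, v} position
   return indices.h + '-' + indices.v;
 }
 function appendSubtitle(text) {
   var key = currentSlideKey();
   (subtitlesBySlide[key] = subtitlesBySlide[key] || []).push(text);
 }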
User characteristics
There are three types of users for our app:
- The teacher, talking while showing his slides
- The students editing the notes
- The students reading the notes, including the partially deaf students
Operating environment
The GoogleSpeech API works in Google Chrome. A good Internet connection is required for the transcription.
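Because the recognizer is only exposed in Chrome (as the prefixed webkitSpeechRecognition constructor), the app could check for it at startup and warn the user; this is only a sketch of such a check:
 // Sketch: warn the user if the browser does not expose speech recognition.
 if (!('webkitSpeechRecognition' in window)) {
   alert('Speech recognition is not available: please use Google Chrome.');
 }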
General constraints
- The teacher needs to have his slides in reveal.js
- The teacher needs to speak loudly and not too fast
- The room has to be quiet (no background noise)
These conditions reduce errors and help the API transcribe the speech correctly. However, transcription will not be perfect due to the instability of the GoogleSpeech API.
Specific requirements, covering functional, non-functional and interface requirements
Requirement X.Y.Z (in Structured Natural Language)
Speech recognition
Description: Capture the voice and return a textual transcription
Inputs: Voice of a speaker
Source: Human
Outputs: Textual data
Destination: User
Action: A speaker talks into a microphone and the system returns the transcript as text
Non functional requirements: Accurate detection of spoken words
Pre-condition: User has a microphone
Post-condition: Words are detected
Side-effects: Words are not detected, or are detected incorrectly
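A minimal sketch of this requirement using Chrome's webkitSpeechRecognition from the Web Speech API; handleFinalTranscript is a hypothetical callback, and the language code is an assumption:
 // Sketch of the speech recognition requirement with Chrome's Web Speech API.
 // handleFinalTranscript is a hypothetical callback; 'fr-FR' is an assumed language.
 var recognition = new webkitSpeechRecognition();
 recognition.continuous = true;                   // keep listening for the whole lecture
 recognition.interimResults = true;               // also receive provisional hypotheses
 recognition.lang = 'fr-FR';
 recognition.onresult = function (event) {
   for (var i = event.resultIndex; i < event.results.length; i++) {
     if (event.results[i].isFinal) {              // only keep final (stable) results
       handleFinalTranscript(event.results[i][0].transcript);
     }
   }
 };
 recognition.onerror = function (event) {
   console.warn('Recognition error:', event.error);   // e.g. 'network', 'no-speech'
 };
 recognition.start();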
Render the subtitles to slides
Description: Show the subtitles on the slides
Inputs: words spoken
Source: Speech recognizer
Outputs: slides with subtitles
Destination: slides
Action: Get the spoken words and display them correctly on the slides
Non functional requirements: No loss of data
Pre-condition: Spoken words are detected
Post-condition: Slides are shown with subtitles
Side-effects: Subtitles are displayed badly and hide the slide content, or are not readable
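A sketch of how this rendering could work with reveal.js, assuming each slide receives a .subtitles container (a hypothetical element, not defined elsewhere in this document); Reveal.getCurrentSlide() is the reveal.js call returning the DOM element of the slide being shown:
 // Sketch: append each final transcript segment to a subtitle container inside
 // the slide currently shown. The '.subtitles' element is an assumed addition
 // to every reveal.js section.
 function showSubtitle(text) {
   var slide = Reveal.getCurrentSlide();          // reveal.js: DOM element of the current slide
   var container = slide.querySelector('.subtitles');
   if (!container) {                              // create the container on first use
     container = document.createElement('div');
     container.className = 'subtitles';
     slide.appendChild(container);
   }
   var span = document.createElement('span');     // one span per segment, so it can be edited later
   span.textContent = text + ' ';
   container.appendChild(span);
 }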
Editing subtitles
Description: The user can edit subtitles: add or modify words
Inputs: Wrongly detected word
Source: Speech recognizer
Outputs: corrected word
Destination: shown subtitles
Action: The user clicks on the word they want to edit, then edits it with the keyboard. The user clicks on the blank space between words to add a word.
Non functional requirements: It must be easy to click on a word, or between words to add one
Pre-condition: Words are detected
Post-condition: words are added or modified
Side-effects: A correct word is removed, or the text is not displayed properly.
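A sketch of the editing interaction, assuming each subtitle word or segment is wrapped in its own span (as in the rendering sketch above); saveCorrection is a hypothetical hook where the collaborative part (e.g. sending the correction to the other students) would plug in:
 // Sketch of the editing requirement: clicking a subtitle span makes it editable
 // in place; leaving it saves the correction. saveCorrection is a hypothetical
 // hook for the collaborative synchronisation.
 function makeEditable(span) {
   span.addEventListener('click', function () {
     span.contentEditable = 'true';               // allow in-place keyboard editing
     span.focus();
   });
   span.addEventListener('blur', function () {
     span.contentEditable = 'false';
     saveCorrection(span.textContent);            // hypothetical: propagate to other students
   });
 }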
Product Evolution
- A “real-time” window that could show a representation of the hand that the camera is currently analyzing. It would allow the user to know whether the camera is able to recognize his hand correctly. It could be done with Qt Creator. Our application is not really user-friendly at this time.
- “Two hands” symbols that are currently not implemented in our application
- Improvements to trajectory recognition
- Language Model
- A better camera