Projets-2015-2016-Borne-Interactive-SRS
Version | Date | Authors | Description | Validator | Validation Date
---|---|---|---|---|---
0.1.0 | January 18, 2016 | Quentin DUNAND - Elsa NAVARRO - Antoine REVEL | Creation of the document | TBC | TBC
1. Introduction
1.1 Purpose of the requirements document
This Software Requirements Specification (SRS) identifies the requirements for the project "Borne Interactive". Since this is an open-source project, the requirements must be presented to potential contributors. This document is a guideline to the functionalities offered and the problems that the system solves.
1.2 Scope of the product
This product is intended for any organization wishing to facilitate the reception of hearing-impaired people. The goal is not to substitute for the original speech but to improve it, and it does not provide any way to answer back.
1.3 Definitions, acronyms and abbreviations
- "Borne Interactive" : Interactive display on two screens or two interfaces : one for the speaker and one for the listener.
- API: Application Programming Interface, a set of routines, protocols, and tools for building software and applications, generally focused on certain tasks such as extracting information about specific data or, in our case, accessing speech recognition methods.
- Web Speech API: the W3C API, implemented by Google in Chrome, used to add speech recognition to our application.
- Google Chrome: the main browser used to run the display.
1.4 References
Web Speech API: https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html
1.5 Overview of the remainder of the document
The remainder of this document gives a general description of the project, followed by its requirements and the product evolution.
2. General description
The aim of this project is to design a new interactive system that helps hearing-impaired people by writing down what the receptionist is saying. It can also help people with concentration problems by providing a way to keep track of the conversation.
2.1 Product perspective
When deaf or hearing-impaired people come to a reception desk, to request information for example, they can have difficulty understanding the receptionist. Our product makes a live transcription of the receptionist's speech and writes it down on a screen. The person then understands the receptionist better, especially in a noisy environment where it is hard to communicate. Since voice recognition makes mistakes and does not recognise technical terms, our product also offers a means of correction for the receptionist. It can also help people with concentration issues who have trouble keeping track of a conversation.
2.2 Product functions
The core function is live transcription of the receptionist's voice. Additional possibilities:
- Correcting the transcription (for mistakes or technical terms).
- Printing the conversation history.
- Simple formatting with buttons.
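The correction feature could be sketched as a simple dictionary of known misrecognitions applied to each transcribed sentence. This is only an illustration: the names below (`correctTranscript`, `corrections`) and the sample entries are hypothetical and not part of the actual codebase.

```javascript
// Hypothetical sketch of the correction feature: the receptionist
// maintains a map of common misrecognitions (e.g. technical terms)
// that is applied to every transcribed sentence.
const corrections = new Map([
  ["borne interactive", "Borne Interactive"],
  ["a p i", "API"],
]);

function correctTranscript(sentence) {
  let result = sentence;
  for (const [wrong, right] of corrections) {
    // Replace every occurrence, case-insensitively.
    result = result.replace(new RegExp(wrong, "gi"), right);
  }
  return result;
}

console.log(correctTranscript("the a p i of the borne interactive"));
// → "the API of the Borne Interactive"
```

In practice the receptionist's interface would let them add entries to this dictionary as new technical terms are misrecognised.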
2.3 User characteristics
There are two types of users: the receptionist and the person with hearing issues. The receptionist can correct the live transcription via his interface and can print the transcript of the conversation. The welcomed person reads the live transcription in an easy way.
2.4 General constraints
The product is made only with Web technologies such as HTML/PHP/JavaScript, so that it runs on almost every device. It must use the Web Speech API in order to explore its limitations and possibilities.
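A minimal sketch of how this constraint plays out in practice, assuming Chrome's prefixed `webkitSpeechRecognition` constructor (the helper name `collectTranscript`, the element id, and the chosen language are illustrative assumptions, not the actual implementation):

```javascript
// Concatenate the final (non-interim) results of a recognition event
// into one transcript string. This part is pure logic.
function collectTranscript(results) {
  let text = "";
  for (const result of results) {
    if (result.isFinal) {
      text += result[0].transcript;
    }
  }
  return text;
}

// Browser-only setup: Google Chrome exposes the prefixed constructor.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;     // keep listening between sentences
  recognition.interimResults = true; // show text while the user speaks
  recognition.lang = "fr-FR";        // assumed: a French reception desk
  recognition.onresult = (event) => {
    document.getElementById("transcript").textContent =
      collectTranscript(event.results);
  };
  recognition.start();
}
```

Recognition itself runs on Google's servers, which is why the network constraint in section 3 applies to this component only.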
2.5 Assumptions and dependencies
3. Specific requirements, covering functional, non-functional and interface requirements
Non-functional requirements
Network constraint: to use the application, an internet connection fast enough to handle the Web Speech API is necessary. It is the only component that connects to the internet; the rest is strictly local.
System requirement: the application can run on any device capable of handling a modern browser with at least two tabs, a very simple HTTP server (Python's simple server in our tests) and, of course, an internet connection.
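Since everything besides recognition is strictly local, one way the two tabs could stay synchronized without any network traffic is through the browser's `storage` event. This is only an assumed sketch; the actual synchronisation mechanism is not specified in this document, and the key name and element id below are hypothetical.

```javascript
// Pure helpers: encode/decode a transcript update so it can travel
// through localStorage, which only stores strings.
function encodeUpdate(text) {
  return JSON.stringify({ text, at: Date.now() });
}

function decodeUpdate(raw) {
  return JSON.parse(raw).text;
}

// Browser-only wiring: the receptionist tab writes, the listener tab reads.
if (typeof window !== "undefined") {
  // Receptionist tab would push each new sentence, e.g.:
  // window.localStorage.setItem("transcript", encodeUpdate("Hello"));

  // Listener tab reacts to updates written by the other tab.
  window.addEventListener("storage", (event) => {
    if (event.key === "transcript" && event.newValue) {
      document.getElementById("display").textContent =
        decodeUpdate(event.newValue);
    }
  });
}
```

The `storage` event fires in every other tab of the same origin, which fits the two-tab setup described above.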
Functional requirements
Live transcription: transcription of the speech pronounced by the person in charge of welcoming people with hearing issues.
Correction: necessary to reduce the impact of possible transcription errors.
3.1 Requirement X.Y.Z (in Structured Natural Language)
Function : Live transcription
Description: when the receptionist speaks, a live transcription can be seen by both users.
Inputs: Voice command or button press to start the transcription.
Source: Mouse and microphone.
Outputs: Screen (possibly touchscreen)
Destination: this device is designed to be used at reception desks welcoming people with hearing issues (university, post office...).
Action: TBC (natural language sentences with MUST, MAY, SHALL).
Non functional requirements:
Pre-condition:
Post-condition:
Side-effects:
4. Product evolution
The very first version was only a web page that showed the text recognised by the API. We then made a version where the two tabs were present, showing the synchronisation between them. There was also an early version of the interface with some buttons to add simple formatting to the text. Then we made the last version: the interface is clearer, the client can ask for help to launch the interface, and the receptionist can print the transcript and correct mistakes.
5. Appendices
W3C Web Speech API Specification:
https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html