Proj-2014-2015-SmartClassRoom/SRS


The document provides the Software Requirements Specification (SRS) of the project. It is inspired by the IEEE/ANSI 830-1998 standard.


Document History

Version | Date          | Authors                            | Description        | Validator                          | Validation Date
0.1.0   | February 2014 | El Hadji Malick Fall, Adam Tiamiou | First description  | El Hadji Malick Fall, Adam Tiamiou | Month, dayth 2015
0.1.1   | April 2014    | El Hadji Malick Fall               | Second description | Adam Tiamiou                       | Month, dayth 2015
1.0.1   | April 2014    | Adam Tiamiou                       | Third description  | El Hadji Malick Fall               | Month, dayth 2015


1. Introduction

1.1 Purpose of the requirements document

The main objectives of this document, which describes our SmartClassroom project, are:

  • to allow people to identify the different parts of our project
  • to allow people to discover the technologies we are going to develop
  • to explain the approach adopted and the solutions we have proposed
  • to provide a working basis for future improvements

1.2 Scope of the product

This project is divided into two parts:

  • Interactive whiteboard
  • Tiled display on touch tables

The long-term objective of this project is to deploy technologies that improve teaching techniques in classrooms. One can thus imagine interactive educational activities, such as assessments providing instant feedback, allowing teachers to see when a concept needs to be revisited or when students need additional help.

1.3 Definitions, acronyms and abbreviations

Goslate

Goslate is a free Python API that provides Google translation by querying the Google Translate service.

Tesseract

Tesseract is an open-source optical character recognition (OCR) engine that can be used:
  • either directly, from the command line or through a graphical interface, to recognize text with a basic layout; this usage is already functional,
  • or with overlays that handle complex layouts, such as OCRopus (still in beta).
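As a minimal sketch of the direct command-line usage, the `tesseract` binary can be driven from Python; the file names here are hypothetical, and the binary is assumed to be installed and on the PATH:

```python
import subprocess

def tesseract_cmd(image_path, output_base, lang="eng"):
    # Build the CLI invocation: `tesseract page.png out -l eng`
    # writes the recognized text to out.txt.
    return ["tesseract", image_path, output_base, "-l", lang]

def ocr_image(image_path, output_base, lang="eng"):
    # Run the OCR; raises CalledProcessError if tesseract fails.
    subprocess.run(tesseract_cmd(image_path, output_base, lang), check=True)
```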


OpenCV

OpenCV (Open Source Computer Vision) is a free graphics library mainly designed for real-time image processing.

SPADE

SPADE (Smart Python multi-Agent Development Environment) is a multi-agent and organizations platform based on XMPP technology and written in the Python programming language. The SPADE Agent Library is a Python module for building SPADE agents. XMPP is the Extensible Messaging and Presence Protocol, used for instant messaging, presence, multi-party chat, voice and video calls, etc.

1.4 References

General :

This is a school project, carried out at Polytech Grenoble and supported by the Fabmstic. The project page on the air wiki is available via this link [1]


Technical :

<To be completed>


Librairies:

Tess4J - http://tess4j.sourceforge.net/

ImageMagick - http://www.imagemagick.org/

Goslate: Free Google Translate API - http://pythonhosted.org/goslate/

1.5 Overview of the remainder of the document

The remainder of this document presents the technical characteristics of the project, such as requirements, constraints, and user characteristics. Section three details the specific functional requirements, performance, system and other related requirements of the project. Supporting information and appendices are also provided.

2. General description

The long-term goal of this project is to develop control software for a manipulator arm to support people with disabilities. Commercial robotic arms exist, but they are unfortunately too expensive and do not offer a high-level control system. Therefore, the aim is to develop a robotic arm that will be able to perform series of movements using markers. The part described in this document focuses on the detection of markers.

2.1 Product perspective

We have to consider the possible evolution of this project.

2.2 Product functions

Marker detection

Initially, from a webcam placed on the robotic arm, it should be possible, in real time, to:

  • Detect a predefined type of marker placed on an object
  • Calculate its coordinates
  • Return the marker position
  • Save the coordinates of the marker in an XML file
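The last step above (saving a detected marker in an XML file) could look like the following sketch; the tag and attribute names are assumptions, since the document does not fix the XML schema:

```python
import xml.dom.minidom

def marker_to_xml(marker_id, corners):
    # corners: the four (x, y) points of the detected marker,
    # e.g. as returned by the ArUco detection.
    doc = xml.dom.minidom.Document()
    root = doc.createElement("marker")
    root.setAttribute("id", str(marker_id))
    doc.appendChild(root)
    for x, y in corners:
        pt = doc.createElement("point")
        pt.setAttribute("x", str(x))
        pt.setAttribute("y", str(y))
        root.appendChild(pt)
    return doc.toprettyxml(indent="  ")
```

The returned string can then be written to the file that the agents exchange.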


Data transmission

Secondly, send the obtained data to the robot to enable it to move to this object. This requires several steps:

  • Connect to the robot using the client / server model based on the system of agents
  • Send the XML file which contains the position of the marker


Simulation

The simulator has to use this information to make the robot move in the right direction.

2.3 User characteristics

This project is intended for people with disabilities. The robotic arm will allow them to grab distant objects. The arm can also be remote-controlled, so users may also be developers who create new sequences of instructions for the robot.

2.4 General constraints

  • Unlike existing technologies, our robotic arm must be able to detect an object bearing a marker and retrieve it.
  • It must also be able to interpret a series of instructions given to it.
  • A format is imposed for writing data: XML.
  • The file format that records the result of the detection must not be changed; changing it may affect the detection of the position of the object when moving it.
  • The marker detection is written in C++.
  • The system of agents is written in Python.
  • The simulator is written in Python.

2.5 Assumptions and dependencies

The system is based on ArUco, which enables the program to retrieve the position of the marker, and SPADE, which makes the data transmission through an XML file possible.

3. Specific requirements, covering functional, non-functional and interface requirements

  • document external interfaces,
  • describe system functionality and performance
  • specify logical database requirements,
  • design constraints,
  • emergent system properties and quality characteristics.

3.1 Requirement X.Y.Z (in Structured Natural Language)

The diagram below describes how our project works.

GlobalArchitecture.png

First, the Client Agent launches the detection with ArUco and gets a marker vector. It then applies a regular expression to extract the coordinates from the marker vector and transform them into XML tags, which are successively transmitted to the Server Agent; the server receives them in order to build the file that will allow it to begin its simulation.
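The extraction step could be sketched as follows, assuming the detection prints each marker vector on one line as `id=(x1,y1)(x2,y2)(x3,y3)(x4,y4)`; the exact output format and tag names are assumptions:

```python
import re

def vector_to_tags(line):
    # Parse a line such as "22=(236.8,86.4)(306.3,84.2)(310.4,153.6)(238.6,157.2)"
    m = re.match(r"(\d+)=((?:\([\d.]+,[\d.]+\)){4})", line)
    if m is None:
        return None  # not a marker vector line
    marker_id = m.group(1)
    points = re.findall(r"\(([\d.]+),([\d.]+)\)", m.group(2))
    tags = ['<marker id="%s">' % marker_id]
    tags += ['<point x="%s" y="%s"/>' % (x, y) for x, y in points]
    tags.append("</marker>")
    return tags  # each tag can be sent to the server as one message
```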

3.1.1 Markers detection

Description: The camera detects the marker on the object and locates it

Inputs: Video stream and markers

Source: ArUco library detection

Outputs: Video stream with the markers highlighted with their ids, and a file which contains one of the detected marker vectors (its id and the coordinates of the four corner points it is made up of)

Destination: All people, and any object which can be grasped by the robot

Action: The ArUco library detects borders and analyzes rectangular regions which may be markers. When a marker is found, its vector is written to a file.

Non functional requirements: The marker detection should be done in real time, as fast as possible

Pre-condition: Have a camera and ArUco library installed

Post-condition: The marker has to be well recognized.

Side-effects: Failure of the marker recognition

3.1.2 Coordinates transfer

Description: Coordinates are extracted from the marker vector and transferred as XML tags

Inputs: A file

Source: SPADE Agents (Client and Server), a platform connection

Outputs: An XML file

Destination: The Agent Server

Action: The Client Agent applies a regular expression to extract the coordinates from the file and sends them to the Server

Non functional requirements: The messages have to be delivered quickly and arrive in order. A message must not be lost.

Pre-condition: Download, configure and launch the SPADE platform connection (on localhost or on a specific IP address)

Post-condition: All messages have reached their destination in order

Side-effects: Acknowledgments are sent by the server
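The ordering and no-loss requirements can be illustrated with sequence numbers and cumulative acknowledgments. This is a self-contained sketch of the idea, not the project's actual protocol (SPADE relies on XMPP for delivery):

```python
def send_in_order(messages):
    # Attach a sequence number to each message so the receiver
    # can verify ordering and detect losses.
    return [(seq, body) for seq, body in enumerate(messages)]

def receive_in_order(numbered):
    # Accept messages only in strict sequence; return the bodies and
    # the acknowledgment number (the next expected sequence number).
    expected, bodies = 0, []
    for seq, body in numbered:
        if seq != expected:
            raise ValueError("message %d missing or out of order" % expected)
        bodies.append(body)
        expected += 1
    return bodies, expected  # `expected` doubles as the cumulative ack
```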

3.1.3 Graphic Simulation

Description: The XML file is parsed, then the marker position is calculated and displayed on a graphical interface

Inputs: An XML file

Source: Python xml.dom and Tkinter libraries

Outputs: A graphical interface

Destination: The robot and the people who use it (such as the other group)

Action: The Server Agent interprets the XML tags, gets back the coordinates of the corresponding point and displays it. The point depicting the robot is also displayed and can be moved with the keyboard

Non functional requirements: The version of the XML file must be 1.0.

Pre-condition: The XML file is readable and contains no error

Post-condition: The point is displayed on the screen with the exact coordinates given in parameter

Side-effects: A pop-up window is generated when the robot reaches the marker's position
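The parsing step on the server side could look like this sketch, using the standard `xml.dom.minidom` module; the `marker`/`point` tag names are assumptions, since the document does not fix the schema:

```python
import xml.dom.minidom

def parse_marker(xml_text):
    # Extract the marker id and its four corner points from the XML,
    # then compute the centre point the simulator can display.
    doc = xml.dom.minidom.parseString(xml_text)
    marker = doc.getElementsByTagName("marker")[0]
    points = [(float(p.getAttribute("x")), float(p.getAttribute("y")))
              for p in marker.getElementsByTagName("point")]
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return marker.getAttribute("id"), (cx, cy)
```

The returned centre would then be drawn on the Tkinter canvas alongside the point depicting the robot.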

4. Product evolution

  • Remote control
  • Use several types of markers
  • Send coordinates in real time over an interface
  • Give the robot the types of markers to detect as parameters
  • Automatically recalibrate the position of the robot if the detected marker position is not optimal

5. Appendices

5.1 Specification

  • The global project's page can be found here
  • Another RICM4 group is working on this project. Their wiki page can be found here:
- Proj-2013-2014-BrasRobot-Handicap-2

6. Index