Proj-2013-2014-Sign2Speech-English

Objective

The goal of our project is to enable people who cannot speak to communicate with a computer through sign language. Our program should understand sign language in order to obey the orders it is given. The orders are then displayed in written form and turned into a vocal signal by speech synthesis.

The program should also be able to learn new hand movements, extending its database so that it can recognize more ideas.

Software Requirements Specification

The Team

  • Tutor: Didier Donsez
  • Members: Arthur CLERC-GHERARDI, Patrick PEREA

State of the Art

Recognition of the sign language alphabet

Recognition of particular hand movements that express ideas (ZCam camera)

Recognition of particular hand movements that express ideas, with translation into Spanish (OpenCV)

Recognition of particular hand movements that express ideas (Kinect)

Tools

We will use two different technologies:

Leap Motion

The Leap Motion is a device that allows you to control your computer with your hands. There is no physical contact; communication with the computer is based on hand movements. You place the Leap Motion below your hands, next to the keyboard.

Compared to the Kinect, it is much smaller. The device measures 8 x 2.9 x 1.1 cm and has a frame rate of 200 Hz (the Kinect has a 30 Hz frame rate). The Leap Motion is composed of two 1.3 MP cameras filming in stereoscopy and three infrared LEDs. It can detect the position of all ten fingers.

The official website has a section for developers. You can download the SDK 1.0 (about 47 MB), which contains APIs for the following languages: C++, C#, Java, Python, Objective-C and JavaScript. The SDK also contains examples for its libraries and functions.
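As an illustration, here is a minimal sketch in C++ against the Leap SDK 1.0 API described above (the listener class name and the printed output are our own): it prints the position of every detected fingertip for each tracking frame.

    #include <iostream>
    #include "Leap.h"

    // Minimal sketch: a listener that prints every fingertip position
    // reported by the Leap Motion for each tracking frame.
    class FrameListener : public Leap::Listener {
    public:
        virtual void onFrame(const Leap::Controller& controller) {
            const Leap::Frame frame = controller.frame();
            const Leap::FingerList fingers = frame.fingers();
            for (int i = 0; i < fingers.count(); ++i) {
                const Leap::Vector tip = fingers[i].tipPosition();
                std::cout << "finger " << i << ": (" << tip.x << ", "
                          << tip.y << ", " << tip.z << ")" << std::endl;
            }
        }
    };

    int main() {
        FrameListener listener;
        Leap::Controller controller;
        controller.addListener(listener);  // callbacks arrive on a Leap thread
        std::cin.get();                    // run until the user presses Enter
        controller.removeListener(listener);
        return 0;
    }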


Intel Creative Camera

This camera also acts as a remote controller for the computer. You place it in front of the user.

The Creative camera features depth recognition, which enables developers to distinguish objects at different depths. It films at around 30 fps in 720p.

Intel also provides an SDK for developers: [1]. Some of its libraries will help us with hand and finger tracking and with facial recognition.
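As a hedged illustration of what hand tracking looks like with this SDK (the class and constant names below follow our reading of the SDK's C++ utility samples and may differ between versions): the pipeline is initialized with the gesture module enabled, then the primary hand node is queried on every frame.

    #include <iostream>
    #include "util_pipeline.h"

    // Sketch: track the primary hand with the Perceptual Computing SDK's
    // utility pipeline and print its world position on every frame.
    int main() {
        UtilPipeline pipeline;
        pipeline.EnableGesture();              // activate hand/finger tracking
        if (!pipeline.Init()) return 1;

        while (pipeline.AcquireFrame(true)) {  // blocking frame acquisition
            PXCGesture* gesture = pipeline.QueryGesture();
            PXCGesture::GeoNode hand;
            if (gesture->QueryNodeData(0,
                    PXCGesture::GeoNode::LABEL_BODY_HAND_PRIMARY,
                    &hand) >= PXC_STATUS_NO_ERROR) {
                std::cout << "hand at (" << hand.positionWorld.x << ", "
                          << hand.positionWorld.y << ", "
                          << hand.positionWorld.z << ")" << std::endl;
            }
            pipeline.ReleaseFrame();
        }
        pipeline.Close();
        return 0;
    }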

Gallery

Pictures Gallery

Work in progress

We received the subject of the project on January 21, 2014. This section will summarize our progress.

Week 1 (January 27 - February 2)

  • Discovery of the Intel Creative camera
  • Discovery of and familiarization with the Intel SDK
  • Choice of programming language (C++)
  • First finger-recognition program
  • First issue: the Intel SDK is not finished. It gives us methods for fingertip, wrist and hand-center recognition, but it cannot tell the fingers apart.
  • First bug detected with the camera: in a single picture from the depth sensor, the same finger can be detected more than once (a possible workaround is sketched below).
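One possible workaround, sketched here under our own assumptions (the 15 mm threshold is a guess that would need tuning): treat two fingertip candidates that lie closer together than a threshold as duplicate detections of the same finger, and keep only one.

    #include <cmath>
    #include <vector>

    struct Point3 { float x, y, z; };

    // Workaround sketch: drop fingertip candidates that lie closer than
    // minDistance (in millimeters) to an already-kept candidate, since
    // they are most likely duplicate detections of the same finger.
    std::vector<Point3> removeDuplicateTips(const std::vector<Point3>& tips,
                                            float minDistance = 15.0f) {
        std::vector<Point3> unique;
        for (size_t i = 0; i < tips.size(); ++i) {
            bool duplicate = false;
            for (size_t j = 0; j < unique.size(); ++j) {
                float dx = tips[i].x - unique[j].x;
                float dy = tips[i].y - unique[j].y;
                float dz = tips[i].z - unique[j].z;
                if (std::sqrt(dx * dx + dy * dy + dz * dz) < minDistance) {
                    duplicate = true;
                    break;
                }
            }
            if (!duplicate) unique.push_back(tips[i]);
        }
        return unique;
    }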

Week 2 (February 3 - February 9)

  • Implementation of an "overlay" on top of the SDK for better finger recognition. The goal is to know which finger the user is showing to the camera (a sketch of the idea follows this list).
  • Correction of the bug where the same finger was detected more than once.
  • Got in contact with Intel through its developer forum.
  • Our camera may have a hardware problem that produces noise in the depth-sensor picture; more tests remain to be done.
  • Addition of a hand-calibration function (implementation not finished).
  • Definition of gestures made of symbols in sign language.
  • Choice of our data structure for symbols and gestures. When the user makes a symbol with his hands, the program looks it up in the symbol table, so the search must be efficient and quick to preserve the real-time aspect. The symbol table can hold more than a hundred symbols (see the second sketch below).
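First sketch, for the "overlay" mentioned above (the labeling rule is our own simplification and assumes a right hand facing the camera): since the SDK reports anonymous fingertips, we order them by horizontal position to guess which finger is which.

    #include <algorithm>
    #include <string>
    #include <vector>

    struct Point3 { float x, y, z; };

    bool compareByX(const Point3& a, const Point3& b) { return a.x < b.x; }

    // Overlay sketch: sort anonymous fingertips from left to right and
    // assign finger names by position (right hand facing the camera).
    std::vector<std::string> labelFingers(std::vector<Point3> tips) {
        static const char* kNames[] =
            {"thumb", "index", "middle", "ring", "pinky"};
        std::sort(tips.begin(), tips.end(), compareByX);
        std::vector<std::string> labels;
        for (size_t i = 0; i < tips.size() && i < 5; ++i) {
            labels.push_back(kNames[i]);
        }
        return labels;
    }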
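Second sketch, for the data-structure choice (the string key encoding is hypothetical): a hash table gives an average O(1) lookup per frame, which stays fast even with several hundred symbols.

    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Sketch: each recognized hand posture is discretized into a string
    // key (encoding is hypothetical) and looked up in a hash table, so a
    // single probe per frame is enough to keep the real-time aspect.
    struct Symbol {
        std::string meaning;  // text to display and send to speech synthesis
    };

    typedef std::unordered_map<std::string, Symbol> SymbolTable;

    int main() {
        SymbolTable table;
        table["open-hand-5-fingers"].meaning = "hello";  // hypothetical key

        SymbolTable::const_iterator it = table.find("open-hand-5-fingers");
        if (it != table.end()) {
            std::cout << it->second.meaning << std::endl;  // prints "hello"
        }
        return 0;
    }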