Project 2014-2015-CannonBall

=Preamble=

The project subject: CannonBall_de_voitures_autonomes

This project has been handed down from year to year by Polytech and Ensimag students:

From January 13th 2014 to March 2nd 2014: Jules Legros and Benoit Perruche from Polytech'Grenoble. Link Wiki Air

From May 26th 2014 to June 16th 2014: Thibaut Coutelou, Benjamin Mugnier and Guillaume Perrin from Ensimag. Link

From January 14th 2015 to March 2nd 2015: Malek Mammar and Ophélie Pelloux-Prayer from Polytech'Grenoble. Here

This project is based on the work of Jules Legros and Benoit Perruche.

It has also been handled by Thibaut Coutelou, Benjamin Mugnier and Guillaume Perrin.

It is now handled by four Polytech students: Alexandre LE JEAN / Malek MAMMAR / Ophélie PELLOUX-PRAYER / Hugo RODRIGUES

=Project presentation=

= Team =


 * Supervisors : Amr Alyafi, Didier Donsez


 * Members : Malek MAMMAR / Ophélie PELLOUX-PRAYER / Alexandre LE JEAN / Hugo RODRIGUES


 * Department: RICM 4, Polytech Grenoble

=Specifications=


 * Get familiar with OpenCV on Windows 8, Android and Ubuntu
 * Benchmark OpenCV marker recognition on different platforms and with different camera models
 * Implement a vehicle steering algorithm (trajectory control, acceleration control, ...)
 * Build a vehicle driving simulator
 * Evaluate the algorithms on the vehicles
 * Set up a server infrastructure for collecting vehicle driving parameters
 * Propose and implement a consensus algorithm which allows multiple vehicles to travel in convoy while maintaining a safety distance between them. For this, the speed and location information recorded by the vehicles and by road radars will be broadcast reliably and in real time between the vehicles travelling on the route or route segment.
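To make the convoy requirement concrete, here is a minimal sketch of a follower-speed rule, assuming a simple proportional controller. All names and parameters (`safety_gap`, `gain`, `max_speed`) are hypothetical illustrations, not part of the project's actual design:

```python
def follower_speed(leader_pos, follower_pos, leader_speed,
                   safety_gap=5.0, gain=0.5, max_speed=8.0):
    """Return the follower's commanded speed (m/s).

    leader_pos / follower_pos : positions along the route (m)
    leader_speed              : leader's current speed (m/s)
    safety_gap                : desired inter-vehicle distance (m)
    gain                      : proportional gain on the gap error
    max_speed                 : actuator limit (m/s)
    """
    gap = leader_pos - follower_pos
    error = gap - safety_gap          # > 0: too far behind, speed up
    cmd = leader_speed + gain * error
    return max(0.0, min(max_speed, cmd))

# At exactly the safety gap, the follower matches the leader's speed.
print(follower_speed(100.0, 95.0, 6.0))   # → 6.0
# Too close (gap of 3 m): slow down below the leader's speed.
print(follower_speed(100.0, 97.0, 6.0))   # → 5.0
```

In a real convoy each follower would feed this rule with the positions and speeds broadcast by the other vehicles and the road radars.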

=Links=

GitHub
GitHub

Documents
Developer Guide

Software Requirements Specifications

UML Diagram

= Progress of the project =

The project started January 14th, 2015.

Week 1 (January 13th - January 18th)

 * Project discovery
 * Discovery of OpenCV
 * Material recovery

Week 2 (January 19th - January 25th)
Software engineering work: Software Requirements Specifications

How to set up the project: Developer Guide

Technology watch

 * The idea of using a Raspberry Pi for image processing was put aside because of a lack of CPU power. To give an order of magnitude, the Core i3 of the Lenovo ThinkPad tablet .... (figures comparing it with the tablet to be added)


 * The idea of using a Raspberry Pi to broadcast the video stream to a local server and process it on another machine is not conceivable because of the transmission latency of the stream. To give an order of magnitude: with a latency of ~1 second, a car at 30 km/h would travel ~8.3 m before the second machine finished its processing.
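The latency figure above can be checked with a one-line computation (the function name is ours, purely for illustration):

```python
def drift_during_latency(speed_kmh, latency_s):
    """Distance (m) a car travels during `latency_s` seconds at `speed_kmh`."""
    return speed_kmh / 3.6 * latency_s  # km/h -> m/s, then times latency

# A car at 30 km/h with ~1 s of stream latency:
print(round(drift_during_latency(30, 1.0), 1))  # → 8.3
```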


 * The idea of using a mini compact tower (example: GB-XM1-3537) is interesting, but the absence of an onboard battery makes it dependent on a 220 V power source.

 * The idea of using a phone is interesting because of the richness and accuracy of its sensors (accelerometer, gyroscope, ...), especially if it has a Tegra-type processor, which offers greater processing performance; a plain ARM processor would not be enough. Moreover, a phone is small and has an onboard battery. The main concern of our predecessors was that external cameras were not supported on Android; we will check on this, because what was true a year ago might not be today.

Technologies chosen

 * We keep the MongoDB database because it offers high write performance and is scalable.


 * We keep Node.js for creating a local server through which the tablet can expose all the data. We use several modules:
 * socket.io for real-time messaging between the client and the server
 * mqtt for a publish/subscribe protocol
 * mongoose for manipulating the MongoDB database from JavaScript


 * We keep the arUco library, which we judge powerful enough for our requirements; moreover, it is based on the OpenCV library.

What we have to accomplish

 * Finish the prototype in order to present it on March 18th, 2015.
 * Redirect the metrics and the webcam stream from the car to any computer connected to the tablet's network
 * Enhance the algorithms of the car.
 * Create a car convoy algorithm.

What we want to do

 * Switch to a Linux environment with a non-proprietary IDE to replace Visual Studio 2013
 * We will stick to the C++ language because it is closest to the OpenCV library, which is itself written in C++.

What we are expected to consider

 * Enhance the movements of the mini car. There are two ways to do so:
 * Add a MEMS shield to the Arduino Uno
 * Replace the Arduino with an STM32 board which already comes with embedded MEMS sensors (microelectromechanical sensors, including accelerometers, gyroscopes, digital compasses, inertial modules, pressure sensors, humidity sensors and microphones).
 * An application with an Oculus device

The risks

 * Our ability to take over our predecessors' code instead of starting from scratch
 * Misunderstandings and communication problems among us and with the 3i team
 * Reaching an unworkable situation due to hardware choices
 * Hitting a speed limit due to the tablet's image-processing capability

Week 3 (January 26th - February 01st)
Note that we have been joined by two colleagues, A. Le Jean and H. Rodrigues.


 * New distribution of tasks: two of us will work on the network part and the two others on optimising the use of OpenCV
 * We have managed to compile the code of our predecessors
 * Meeting with the 3i team to coordinate our efforts toward a global improvement of the project. Several proposals were made by them:
 * Install a mechanical arm on the car
 * Make the camera swivel 180 degrees
 * Replace the Arduino board with an STM32 (better equipped with sensors) to improve the precision of movement
 * Replace the Arduino board with a board designed for image processing and evaluate the possible performance gain

Week 4 (February 02nd - February 08th)

 * We optimised the OpenCV processing rate from 3 fps to 15 fps