Proj-2014-2015-Regie Video Autonome Et Mobile Multicamera/SRS (revision of 2015-03-09, Laurent.Zominy)
<hr />
<div>This document provides a template of the Software Requirements Specification (SRS). It is inspired by the IEEE/ANSI 830-1998 standard.<br />
<br />
<br />
'''Read first:'''<br />
* http://www.cs.st-andrews.ac.uk/~ifs/Books/SE9/Presentations/PPTX/Ch4.pptx<br />
* http://en.wikipedia.org/wiki/Software_requirements_specification<br />
* [http://www.cse.msu.edu/~chengb/RE-491/Papers/IEEE-SRS-practice.pdf IEEE Recommended Practice for Software Requirements Specifications IEEE Std 830-1998]<br />
<br />
{|class="wikitable alternance"<br />
|+ Document History<br />
|-<br />
|<br />
!scope="col"| Version<br />
!scope="col"| Date<br />
!scope="col"| Authors<br />
!scope="col"| Description<br />
!scope="col"| Validator<br />
!scope="col"| Validation Date<br />
|-<br />
!scope="row" |<br />
| 0.1.0<br />
| TBC<br />
| BODARD Christelle, QIAN Jean, ZOMINY Laurent<br />
| TBC<br />
| TBC<br />
| TBC<br />
<br />
|}<br />
<br />
<br />
=1. Introduction=<br />
==1.1 Purpose of the requirements document==<br />
<br />
This Software Requirements Specification (SRS) identifies the requirements for the Autonomous and Mobile Video Control system.<br />
<br />
==1.2 Scope of the product==<br />
<br />
This project is based on a robot named RobAir, equipped with a camera. The purpose is to enable the robot to recognize a specific person and follow them; the list of people to be recognized is sent by an Android app.<br />
<br />
==1.3 Definitions, acronyms and abbreviations==<br />
<br />
*'''Android''': The most widely used mobile OS. Here it is used to send images or video frames to the robot.<br />
*'''Face recognition''': The automatic identification or verification of a person from a digital image or a video frame.<br />
*'''OpenCV''': A library of programming functions aimed mainly at real-time computer vision, originally developed by Intel's research center in Nizhny Novgorod, Russia, and now supported by Willow Garage and Itseez.<br />
*'''Python''': A widely used general-purpose, high-level programming language whose design emphasizes code readability; its syntax lets programmers express concepts in fewer lines of code than languages such as C++ or Java. Here it is used to implement face recognition with OpenCV.<br />
<br />
==1.4 References==<br />
==1.5 Overview of the remainder of the document==<br />
<br />
=2. General description=<br />
==2.1 Product perspective==<br />
<br />
Our system is divided into three parts:<br />
*Face images of target persons are taken and sent to the robot by the Android application.<br />
*The robot detects the target person among all the people within view of the cameras.<br />
*The robot automatically follows the target person.<br />
<br />
==2.2 Product functions==<br />
<br />
*'''On-the-fly update of the list of target persons''': The Android application can take photos of new target persons and send them to the robot whenever needed.<br />
*'''Face recognition''': The robot automatically detects the target persons among all persons within the view of the cameras.<br />
*'''Follow''': The robot follows the person whose image was sent by the Android app.<br />
*'''Resize''': The robot automatically resizes the window to focus on the target's face.<br />
<br />
==2.3 User characteristics==<br />
<br />
The user needs no special knowledge: the robot automatically follows the target person. The user only has to take pictures of the target person's face.<br />
<br />
==2.4 General constraints==<br />
<br />
<br />
==2.5 Assumptions and dependencies==<br />
The accuracy of face detection depends highly on two parts:<br />
*The quality of face samples<br />
*The efficiency and accuracy of the face detection algorithm.<br />
<br />
=3. Specific requirements, covering functional, non-functional and interface requirements=<br />
* document external interfaces,<br />
* describe system functionality and performance<br />
* specify logical database requirements,<br />
* design constraints,<br />
* emergent system properties and quality characteristics.<br />
<br />
==3.1 Requirement X.Y.Z (in Structured Natural Language)==<br />
'''Function''': Learn someone's face so as to recognize this person and keep watching them.<br />
<br />
'''Description''': <br />
<br />
'''Inputs''': Face pictures<br />
<br />
'''Source''': Android Smartphone<br />
<br />
'''Outputs''': Detection and recognition<br />
<br />
'''Destination''': Robot<br />
<br />
'''Action''': <br />
<br />
* The user must take pictures of target people and send them to the database via our application.<br />
* The robot must detect faces and may recognize them. It must then follow the recognized face.<br />
<br />
* Graphical Notations: UML Sequence w/o collaboration diagrams, Process maps, Task Analysis (HTA, CTT)<br />
* Mathematical Notations<br />
* Tabular notations for several (condition --> action) tuples<br />
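The "follow" action above can be sketched as a simple proportional-style controller: the robot turns to keep the recognized face centred in the frame and moves forward when the face appears small (i.e. the person is far away). The thresholds and command names below are hypothetical, not taken from the specification:

```python
# Illustrative follow logic: steer the robot toward the detected face.
# Thresholds and command names are assumptions for the sketch.
def follow_command(face_box, frame_width, near_width=120):
    """Map a face bounding box (x, y, w, h) to a motion command."""
    x, _, w, _ = face_box
    face_centre = x + w / 2
    offset = face_centre - frame_width / 2
    if offset < -frame_width * 0.1:
        return "turn_left"
    if offset > frame_width * 0.1:
        return "turn_right"
    if w < near_width:          # face looks small, person is far away
        return "forward"
    return "stop"
```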
<br />
'''Functional requirements''':<br />
<br />
* Detect a face<br />
* Recognize the face<br />
* Track the recognized face<br />
* Prioritize among faces for tracking<br />
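The prioritization requirement above can be sketched as picking the largest detected face, on the assumption that the largest bounding box belongs to the nearest person; this heuristic is an assumption, not part of the specification:

```python
# Hypothetical prioritization: track the largest (presumably nearest) face.
def pick_priority_face(boxes):
    """Given a list of (x, y, w, h) boxes, return the one with largest area."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])
```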
<br />
'''Non functional requirements''':<br />
<br />
* Accuracy<br />
* Speed<br />
* Smoothness of camera movement<br />
<br />
'''Pre-condition''':<br />
<br />
<br />
<br />
'''Post-condition''':<br />
<br />
<br />
<br />
'''Side-effects''':<br />
<br />
<br />
<br />
=4. Product evolution=<br />
<br />
=5. Appendices=<br />
=6. Index=</div>
Proj-2014-2015-RegieVideoAutonomeEtMobileMulticamera (revision of 2015-01-19, Laurent.Zominy)
<hr />
<div>=Presentation=<br />
==Team==<br />
===Supervisor/Client===<br />
* Thierry Cravoisier : thierry.cravoisier@free.fr<br />
* Didier Donsez : didier.donsez@imag.fr<br />
<br />
===Students===<br />
* Christelle Bodard (project manager) : christelle.bodard@hotmail.com<br />
* Jean Qian : xuey90@gmail.com<br />
* Laurent Zominy : laurent.zominy@wanadoo.fr<br />
<br />
==Project==<br />
The goal of this project is to create a mobile, autonomous video control room based on the Rob'Air platform. Our robot will film specific people using face recognition and then follow their movements. The videos will then be mixed and sent to a computer over Wi-Fi for viewing.<br />
<br />
=Course of the project=<br />
<br />
<br />
==Project schedule==<br />
<br />
<br />
=Project status=<br />
<br />
<br />
=References=</div>