Project AlphaUI : Computer Vision and Virtual Menu Navigation

We have always sought new ways to interact with computers. From typed commands to automatic speech recognition, the aim has been to make interaction feel natural, as if we were talking not to a computer but to a human.

Project AlphaUI

AlphaUI is a virtual menu interface that lets you interact naturally with the on-screen GUI. A webcam captures live frames, and image processing determines where in the given space the user is pointing.
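Once the pointer position is known, selecting a menu item reduces to a hit test against the button rectangles. The layout, button names, and `MenuButton` type below are hypothetical, a minimal sketch of how such a check might look:

```cpp
#include <string>
#include <vector>

// A hypothetical menu button: a named rectangle in frame coordinates.
struct MenuButton {
    std::string name;
    int x, y, w, h;
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

// Return the name of the button under the tracked point, or "" if none.
std::string hitTest(const std::vector<MenuButton>& menu, int px, int py) {
    for (const auto& b : menu)
        if (b.contains(px, py)) return b.name;
    return "";
}
```

In a real interface you would also debounce the selection, e.g. require the pointer to dwell on a button for a few frames before triggering it.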



The program is written in C++ using the OpenCV 3.1.0 library, and each captured frame is processed to extract the relevant information.
To demonstrate the project, I have used my earlier computer vision project, an Automatic Face Recognition System. The AlphaUI interface is built on top of the Face Recognition System with a custom GUI that integrates both projects. The functional responses of the interface have been disabled for the demo. Any developer can define their own GUI in the same way for a system that requires user interaction.

Screenshots of the system: 

The AlphaUI interface

Ball tracked continuously by the system

Touchless Interaction with the interface


The system can be trained on any object of interest, provided it is distinct in color (read: HSV segmentation). Training is done by repeatedly marking the object with the mouse pointer. This step needs to be done only once, or again whenever you switch to a new marker. The resulting values are saved to a text file and reused on subsequent runs.
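Persisting the trained values is straightforward. The format and struct below are an assumption for illustration, a plain six-number text file holding the lower and upper HSV bounds:

```cpp
#include <fstream>
#include <string>

// Trained HSV bounds for the marker colour.
struct HsvRange { int hMin, sMin, vMin, hMax, sMax, vMax; };

// Write the six values space-separated on one line.
bool saveRange(const std::string& path, const HsvRange& r) {
    std::ofstream out(path);
    if (!out) return false;
    out << r.hMin << ' ' << r.sMin << ' ' << r.vMin << ' '
        << r.hMax << ' ' << r.sMax << ' ' << r.vMax << '\n';
    return true;
}

// Read the values back on the next run; returns false if the file is
// missing or malformed, in which case training should be rerun.
bool loadRange(const std::string& path, HsvRange& r) {
    std::ifstream in(path);
    return static_cast<bool>(in >> r.hMin >> r.sMin >> r.vMin
                                >> r.hMax >> r.sMax >> r.vMax);
}
```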

Disclaimer:
This project was done about a year ago but never saw daylight until now. What would you do with the possibilities of this project? Do comment and let me know.

Step By Step One Goes Very Far

Environment used:
Ubuntu 16.04
Code::Blocks IDE
OpenCV 3.1.0 : C++