My final idea is implementing neural networks for gesture recognition. I started with one of Shiffman's examples, which recognises '0' and '1' using pixel data. Working with this code and one finger, it performed very well for '0' and '1' because they look very different from each other, but pixel data is not a good approach for similar-looking gestures. Then I found a Flash app at bytearray.org. It divides mouse recordings into arrays and assigns each direction a number (there are 8 directions). So I recorded the mouse inputs that differ from the previous mouse location and stored 20 samples in a database. For each sample I also stored the segment length divided by the sum of all segment lengths, which makes scaling easier: for a triangle with sides of 3, 4, and 5 cm, what I record for each side (along with its direction) is 3/12, 4/12, and 5/12, so the same shape drawn at any size produces the same values.
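The two features described above can be sketched in plain Java. This is a minimal illustration, not the actual code from the blog's Processing apps: the class and method names (`GestureFeatures`, `direction`, `normalizedLengths`) are my own, and I assume standard screen-style coordinates with angles quantized into 45-degree sectors.

```java
public class GestureFeatures {

    // Quantize a movement vector (dx, dy) into one of 8 directions, 0..7.
    // 0 is along the positive x-axis; each step is a 45-degree sector.
    static int direction(double dx, double dy) {
        double angle = Math.atan2(dy, dx);              // range -PI..PI
        int dir = (int) Math.round(angle / (Math.PI / 4.0));
        return (dir + 8) % 8;                           // wrap negatives into 0..7
    }

    // Divide each segment length by the total path length, so a gesture
    // drawn large or small yields the same feature values (scale invariance).
    static double[] normalizedLengths(double[] lengths) {
        double total = 0;
        for (double len : lengths) total += len;
        double[] out = new double[lengths.length];
        for (int i = 0; i < lengths.length; i++) {
            out[i] = lengths[i] / total;
        }
        return out;
    }

    public static void main(String[] args) {
        // A rightward mouse movement falls in direction sector 0.
        System.out.println(direction(1, 0));

        // The 3-4-5 triangle example: sides normalized by the perimeter of 12.
        double[] tri = normalizedLengths(new double[]{3, 4, 5});
        System.out.println(tri[0] + " " + tri[1] + " " + tri[2]);
    }
}
```

Each recorded sample would then pair a direction code with a normalized length, which is what makes the representation robust to scale in a way raw pixel data is not.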
I am planning to make some more improvements to this project and create a database that grows over time. My idea is to let people at ITP correlate gestures with their drawings. I need to work on this idea more during the summer.
Here is the Processing app with pixel data (needs a camera)
Here is the Processing app with vectors (works with the mouse)