Archive for the ‘Uncategorized’ Category
Finally C2M is Done!
CONGRATULATIONS!
So, seeing as I only post on the blog when there is good, happy news 😀
Guys, finally we have finished the application, with the space, the backspace, all the rest of those features, and the double click. We also made the look-down gesture perform the swap.
All that is left, God willing, is the test cases and the demo video.
Pray for us. SALAM
A couple of toys and we’re ready to rock !
Dears,
The post title is weird, I know :). So, here are the final project tasks. Please reply to the mail I sent with the task(s) you'll be participating in isA, in addition to the points that are obligatory.
Agenda:
=======
1. Having a good lunch together (Obligatory 🙂 )
2. Emotiv Issues
a. Try to order the Development Headset
b. Try to use the SDK Lite
3. Documentation (Obligatory)
4. Combining the SSVEP and Blink applications
a. Requires cleaning the last recorded data and testing it
b. Integrate the two applications
5. Preparing for the Final Seminar (Obligatory)
6. Start implementing the P300 Technique
What I've done so far
My Dear Team,
This is my first post, sorry for being late. My progress will be handled as subjects:
Subject One: SSVEP Experiment
– SSVEP stands for Steady State Visually Evoked Potentials: signals that are the brain's natural response to visual stimulation at specific frequencies. When the retina is excited by a visual stimulus flickering between 3.5 and 70 Hz, the brain generates electrical activity at the same frequency as the stimulus (or at multiples of it). We chose this technique for its excellent signal-to-noise ratio, its relative immunity to artifacts, and its high transfer rate; SSVEPs also provide a means to characterize preferred frequencies of neocortical dynamic processes.
Four small checkerboards flickering at different but fixed frequencies move along with a navigated car. The subject is able to control the movement of the car by focusing his or her attention on a specific checkerboard. SSVEPs are generated, so we can identify the checkerboard the subject is gazing at using one of the detection techniques, and eventually perform the required navigation.
– The frequency detection techniques I've learned about are CCA and PRSA:
PRSA (research only): used with high frequencies, from 43 to 46 Hz. It quantifies the quasi-periodic oscillations (signals with a periodic nature) that are masked by the non-stationary nature of composite signals and by noise. Essentially, it compresses the signal into a much shorter sequence, keeping all the relevant quasi-periodicities while eliminating non-stationarities, artifacts, and noise.
CCA (Canonical Correlation Analysis): used for frequency recognition in multi-channel signals. The unknown signals are compared against known templates, and their frequencies are recognized according to the highest resulting correlation coefficient. How does that work? CCA searches for the highest correlation between two sets of variables: one set contains reference signals with frequencies matching those of the flickering objects, and the other contains the generated SSVEPs. By finding the candidate that yields the highest canonical correlation coefficient, we can detect the SSVEP frequency and accordingly determine the subject's intended target.
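The CCA detection idea described above can be sketched in Python (a minimal illustration, not our actual implementation; the sampling rate, channel count, and candidate frequencies below are assumptions):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Singular values of Qx^T Qy are the canonical correlations
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_templates(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine templates at the flicker frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def detect_ssvep_cca(eeg, fs, candidate_freqs):
    """eeg: (n_samples, n_channels) array. Returns the candidate flicker
    frequency whose templates correlate best with the recording."""
    n = eeg.shape[0]
    return max(candidate_freqs,
               key=lambda f: max_canonical_corr(eeg, reference_templates(f, fs, n)))
```

With a clean synthetic 12 Hz response in the EEG, `detect_ssvep_cca` picks 12 out of a candidate set like [8, 10, 12, 15].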
I tried to apply this technique to recorded signal data and generated stimulus frequencies in MATLAB, but the results were inconsistent: the highest coefficient was far below the expected values, even after applying outlier-removal functions, which may themselves have affected the resulting coefficients. Combined with its low performance when integrated with C#, these problems made the technique unsuitable, so we used the Fourier transform instead.
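The Fourier-based alternative can be sketched as follows (a hedged Python illustration, not our MATLAB/C# code; it simply picks the candidate frequency with the most spectral power):

```python
import numpy as np

def detect_ssvep_fft(signal, fs, candidate_freqs):
    """signal: 1-D single-channel EEG segment. Returns the candidate
    flicker frequency with the largest magnitude in the FFT spectrum."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Magnitude at the FFT bin nearest each candidate frequency
    power = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs}
    return max(power, key=power.get)
```

This is much cheaper than CCA and trivial to port between environments, at the cost of using only one channel and no harmonic information.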
Subject Two: 3D Face
The blink detection application made by Hani and Saleh needed an animated face rather than the prototype 2D face integrated with it. By creating a WPF application and integrating a ZAM 3D animated face through Expression Blend, producing XAML code placed in a WPF window, we were able to create a 3D environment: when a blink is detected, a storyboard is triggered (the face animation, which is the blink). The face still needs some modifications, plus look-up and look-down animations as well, and we thought of making it track the cursor if possible.
Subject Three: P300 Speller
Menna needed a P300 speller for the P300 technique. It is a 6×6 matrix containing the alphabetic letters and the numbers from 0 to 9. Its main idea is that the rows and columns flash randomly, one at a time. Why not sequentially? Because the P300 signal amplitude is proportional to how unexpected the flash is. There are two speeds applied: a fast ISI (inter-stimulus interval) of 62.5 ms and a slow ISI of 312.5 ms; the short ISI produces a higher ITR (information transfer rate), while the long ISI produces higher amplitudes. When a P300 is generated, it indicates an intended row or column; the second time it is generated, it indicates the intersecting column or row, and the intersection of the two is the targeted letter.
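The speller grid and its random row/column flashing can be sketched like this (a minimal Python illustration; the exact character layout is an assumption):

```python
import random
import string

# 6x6 speller matrix: 26 letters followed by the digits 0-9
CHARS = string.ascii_uppercase + string.digits
MATRIX = [list(CHARS[r * 6:(r + 1) * 6]) for r in range(6)]

def flash_order():
    """One stimulation round: the 6 rows and 6 columns flashed in random
    order. Random (not sequential) order keeps each flash unexpected,
    which keeps the P300 amplitude high."""
    order = [('row', i) for i in range(6)] + [('col', j) for j in range(6)]
    random.shuffle(order)
    return order

def decode(row, col):
    """The character at the intersection of the P300-eliciting row and column."""
    return MATRIX[row][col]
```

For example, `decode(0, 0)` is 'A' and `decode(5, 5)` is '9' with this layout.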
Subject Four: Blinks and Speller Integration
Using the P300 speller and the blink detection technique, I made the occurrence of a blink select the currently flashing row or column, with the next blink selecting the other accordingly, and at last the intended intersection cell is displayed. We used the speller along with blinks in case we could not apply P300 for the keyboard usage.
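The blink-driven selection can be sketched as below (a self-contained Python illustration; the grid layout and the way flash and blink events are represented are assumptions):

```python
import string

# Same 6x6 speller layout: 26 letters followed by the digits 0-9
CHARS = string.ascii_uppercase + string.digits
MATRIX = [list(CHARS[r * 6:(r + 1) * 6]) for r in range(6)]

def spell_with_blinks(flashes, blink_indices):
    """flashes: ordered list of ('row'|'col', index) stimulation events.
    blink_indices: positions in that list during which a blink was detected.
    The first blinked row flash fixes the row, the first blinked column
    flash fixes the column; their intersection is the spelled character."""
    row = col = None
    for i, (kind, idx) in enumerate(flashes):
        if i in blink_indices:
            if kind == 'row' and row is None:
                row = idx
            elif kind == 'col' and col is None:
                col = idx
        if row is not None and col is not None:
            return MATRIX[row][col]
    return None  # not enough blinks to select a cell
```

So blinking during the flash of row 2 and later during the flash of column 3 spells the character at their intersection.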
The figure to the right is the P300 speller
Finally, I now consider myself an official Cap 2 Monitor blog user, with a line underneath me 🙂
Minutes of Meeting (Last Meeting)
MoMs:
=====
1. Menna and Omar did the application in the form of colored regions on the desktop
2. Basma integrated the blink detector with the flashing speller
3. Saleh and Hani got the SSVEP online application working on Mina's datasets
Great Work team, I can see the end of our journey very soon isA 🙂
Eye Techniques Library
Dear Team,
I sent you an e-mail that has the Eye Techniques library attached to it. I just wanted to mention how to use it; here are the steps:
1. Add reference to that library
2. ClsLookDownDetection bd = new ClsLookDownDetection();
bd.LookDownOccured += new EventHandler(bd_LookDownOccured);
bd.Start();
3. Add the logic that you want to perform in the event handler bd_LookDownOccured, and that's it, you're ready to use it 😀
An important thing: you must have the Virtual Acquisition System running and transmitting the signal so that the library has data to work on.
For Menna: I’ll change the name of the methods as we agreed don’t worry 🙂
Best Regards,
Hani Amr
Emotiv SDKLite !
Dear Team,
Let’s wake up again and focus. I have two issues to talk about in that topic:
1. Please all register at http://www.Imaginecup.com and complete your profile in order to compete (remember, that was an objective for us).
2. I forwarded you all an email sent to me by Emotiv that contains instructions on how to download the SDKLite, which, according to what they say, can help increase usability in applications and can integrate the 3 detection suites they built (Affectiv, Expressiv and Cognitiv). So, I'm downloading it now, as I guess it may help us a lot.
By the way, in the ITIDA document we only mentioned the Emotiv headset as our hardware, as it fits within the range of 10,000 LE (the maximum budget).
Best Regards,
Hani Amr
About Cap-2-Monitor
Cap 2 Monitor is an attempt to connect a brain directly with a computer. This is of the utmost interest for people with severe motor disabilities, who cannot use standard communication devices like keyboards or cursors.
It relies on non-invasive EEG (electroencephalogram) electrodes, which are attached to the scalp. The electrodes detect the EEG signals related to motor intentions, like the preparation to move the left hand, or just imagining making such movement. Once the EEG signal has been decoded, it can be used to move a cursor on the screen, or to execute commands in a computer. For instance, the intention to move the right hand can be used to move the cursor to the right, and so on.
Cap-2-Monitor .. Hello World
Here, we keep track of our graduation project, Cap-2-Monitor.