Thursday, June 26, 2014

Week Three - Day Two

Today, I continued learning about the Fourier transform, which I started reading about between Day One and Day Two. This is to help with the Python project that analyzes the data collected from the EEG. The signals collected, which are just functions of time on a graph, will be turned into functions of frequency to be analyzed by the Python code.

Also: Found out that Emotiv EPOC only runs for 800 seconds before crashing.

The next few pieces of information are also in the PowerPoint shown below, but here is a summary. The Fourier transform is linear since it is both homogeneous and additive.
For a homogeneous system, amplifying the input likewise amplifies the output: S{a·f(t)} = a·S{f(t)}
For an additive system, the response to a sum is the sum of the responses: S{f1(t) + f2(t)} = S{f1(t)} + S{f2(t)}
It follows that all linear systems produce an output of zero when the input is zero: S{0} = 0
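These three properties can be checked numerically. Here is a small sketch (not from the project code — the signals are made up for illustration) using a naive standard-library DFT:

```python
# Sketch: verify homogeneity, additivity, and zero-in/zero-out for the
# discrete Fourier transform, using a naive O(N^2) DFT (stdlib only).
import cmath

def dft(x):
    """Naive discrete Fourier transform of a list of samples."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 16
f1 = [cmath.cos(2 * cmath.pi * 3 * t / n).real for t in range(n)]
f2 = [cmath.sin(2 * cmath.pi * 5 * t / n).real for t in range(n)]
a = 2.5

# Homogeneity: S{a f(t)} == a S{f(t)}
lhs = dft([a * v for v in f1])
rhs = [a * v for v in dft(f1)]
assert all(abs(p - q) < 1e-9 for p, q in zip(lhs, rhs))

# Additivity: S{f1 + f2} == S{f1} + S{f2}
lhs = dft([u + v for u, v in zip(f1, f2)])
rhs = [u + v for u, v in zip(dft(f1), dft(f2))]
assert all(abs(p - q) < 1e-9 for p, q in zip(lhs, rhs))

# Zero input gives zero output: S{0} == 0
assert all(abs(v) < 1e-12 for v in dft([0.0] * n))
```

All three assertions pass, matching the definitions above.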

Euler's formula:
e^(i·2πt) = cos(2πt) + i·sin(2πt)
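A quick numerical sanity check of the formula above (a sketch using only the standard library; the sample point t is arbitrary):

```python
# Check Euler's formula e^(i*2*pi*t) = cos(2*pi*t) + i*sin(2*pi*t)
# at an arbitrary point t.
import cmath

t = 0.3  # arbitrary sample point
lhs = cmath.exp(1j * 2 * cmath.pi * t)
rhs = cmath.cos(2 * cmath.pi * t) + 1j * cmath.sin(2 * cmath.pi * t)
assert abs(lhs - rhs) < 1e-12
```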

Fourier analysis of a periodic function means taking the overall "weird" function and separating its sine and cosine components into simplified pieces of a whole. For example, when you play a chord on a piano, the keys are struck at the same time, so you hear one sound; but when you use Fourier analysis, the frequency of each key is taken into account, so you get three different frequencies, which means three different waves. This is what transforming a function of time into a function of frequency means. And if you take the inverse, you are transforming a function of frequency back into a function of time.
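The piano-chord idea can be sketched in code (the "keys" here are made-up integer bin numbers, not real piano frequencies): a signal built from three sine waves looks like one messy wave in time, but its spectrum shows three clean peaks.

```python
# Sketch: a "chord" of three sine waves, decomposed by a naive DFT
# into its three component frequencies (stdlib only).
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 64
keys = [3, 5, 8]  # three "keys": cycles per window (illustrative, not real notes)
chord = [sum(cmath.sin(2 * cmath.pi * k * t / n).real for k in keys)
         for t in range(n)]

spectrum = [abs(v) for v in dft(chord)]
# Look only at the first half; the second half mirrors it for real signals.
peaks = sorted(range(n // 2), key=lambda k: spectrum[k], reverse=True)[:3]
assert sorted(peaks) == keys  # the three component frequencies are recovered
```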



The Fast Fourier Transform (FFT) is an algorithm to compute the Discrete Fourier Transform (DFT) and its inverse. A transform is a mapping between two sets of data/domains (the time domain, the frequency domain, or even a space domain). For a real-valued input signal, the real component of the DFT is an even function and the imaginary component is an odd function on the plane with a real (x) axis and an imaginary (y) axis.
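The even/odd symmetry claim can also be checked numerically. A sketch (the random signal is just an arbitrary stand-in for real data): for a real input, X[k] is the complex conjugate of X[N−k], so the real part is even and the imaginary part is odd.

```python
# Sketch: conjugate symmetry of the DFT of a real-valued signal,
# i.e. X[k] == conj(X[N-k]) (stdlib only).
import cmath
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

random.seed(0)
n = 32
x = [random.uniform(-1, 1) for _ in range(n)]  # arbitrary real-valued signal
X = dft(x)

for k in range(1, n):
    assert abs(X[k].real - X[n - k].real) < 1e-9  # real part is even
    assert abs(X[k].imag + X[n - k].imag) < 1e-9  # imaginary part is odd
```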

Wednesday, June 25, 2014

Week Three - Day One

The headset came in on Monday, today being Tuesday. Getting a good signal was difficult and required putting more saline solution on the electrodes and weaving them through masses of hair. But once the signal turned green (meaning a good signal), we could mess around with the Expressiv, Affectiv, and Cognitiv Suites.
The Expressiv Suite displayed the robot face mimicking our facial expressions and we could train expressions such as smiling and raising an eyebrow. 
The Affectiv Suite displayed the signals of our emotions. When the user was frustrated, the frustration signal would shoot up to the top of the chart. The signals would oscillate as well, showing that everything displays in real time. We tried calming down to increase the meditation signal and decrease the frustration signal.
The Cognitiv Suite was where we spent most of the time. We found that training the neutral state was more difficult than we imagined, because we couldn't think about anything and had to stay calm. And once you learn an action to use on the displayed cube, returning to the neutral state becomes difficult because you get too involved or immersed in the action; stepping away and relaxing helps reduce mental fatigue. As the user adds more actions (only up to 4 at a time), the difficulty increases to a cap of "Expert Only!", showing that the more actions you try to think at once, the harder it is to control any one specific action. The mind gets jumbled up in all the thoughts, and making distinct actions isn't easy.
The skill rating bar on the side displays a percentage for each action. The skill rating shows how consistently one held the mental thought during training, as a measure of focus and concentration.

Now I must work on an analysis of the data using Python as another project. The goal is to take all the different signals from the 14-16 electrodes while the user wearing the Emotiv EPOC headset is reading or playing a game, and determine what the activity is. To do this, I will be studying parts of Fourier analysis to understand how to reduce those signals to a simpler reading.
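A rough sketch of what that analysis might look like (the channel names and signals below are invented for illustration; real data would come from the headset): compute each channel's spectrum and report its dominant frequency.

```python
# Sketch: per-channel dominant-frequency analysis of simulated EEG data.
# Channel names and sampling rate are assumptions for illustration.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

fs = 128  # assumed sampling rate in Hz
n = 128   # one second of samples
channels = {
    # simulated channels, each dominated by a single rhythm
    "AF3": [cmath.sin(2 * cmath.pi * 10 * t / fs).real for t in range(n)],  # alpha-like
    "O1":  [cmath.sin(2 * cmath.pi * 6 * t / fs).real for t in range(n)],   # theta-like
}

dominant = {}
for name, samples in channels.items():
    spectrum = [abs(v) for v in dft(samples)]
    k = max(range(1, n // 2), key=lambda i: spectrum[i])  # skip the DC bin
    dominant[name] = k * fs / n  # convert bin index to Hz

assert dominant == {"AF3": 10.0, "O1": 6.0}
```

Real EEG is far noisier than this, so the eventual code would likely need windowing and averaging, but the bin-index-to-Hz conversion (k · fs / N) is the core step.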

Sunday, June 22, 2014

Week Two - Day Two

An action potential in the brain occurs when ions are released outside the cell. When many ions are released and create a wave at the electrode (by repelling away from each other), the electrode becomes polarized, allowing the voltage to be measured. Electrodes are spaced out around the scalp in order to capture neuron activity from all regions.

Theta waves have a frequency range between 4 Hz and 7 Hz and can represent abnormal activity, along with reports of relaxed, meditative, and creative states. Violet light has the shortest visible wavelength, with a frequency around 7.5 × 10^14 Hz; the suggested connection is that the color purple relaxes you.

Eye tracking and EEG are sometimes linked together to improve readings of which areas of the brain light up when engaged in cognitive tasks.

Action power, shown in the Emotiv Control Panel of the Emotiv SDK, indicates the certainty with which the user is hitting the mental state they are supposed to activate for the cognitive task. The more action power, the stronger the signal and connection to that area of the brain, meaning concentration is better and the block is able to perform that action longer.

Emotiv API functions that modify or retrieve EmoEngine settings are prefixed with "EE_", which is why most of the code I've seen relating to the Emotiv EPOC has that prefix, for example EE_EmoEngineEventCreate(). Expressive commands or actions are prefixed with "EE_Expressiv". The trained_sig value works with EE_ExpressivGetTrainedSignatureAvailable(), which reports whether a trained action such as a smile or eyebrow raise is available. However, the eyelid-related expressions (Blink, Wink, Look Left, and Look Right) cannot be trained.

A PowerPoint explaining each Emotiv EPOC suite, as well as a summary of the first two weeks, is down below.

Tuesday, June 17, 2014

Week Two - Day One

Starting to touch upon the Emotiv EPOC C++ code and learning how it works. EE_EmoEngineEventGetEmoState() and the EE_EmoStateUpdated event are used to retrieve the updated EmoState. When connecting to the EmoEngine, always have a case in which you test the return code:

if (EE_EngineConnect() != EDK_OK)
{
    throw std::runtime_error("Emotiv Engine start up failed.");
}

Buffers temporarily store information in memory, whether output or input, while the data is being transferred.

It's also important to always check for other error codes returned from EE_EngineGetNextEvent(), such as when no events have been published. At the end of the program, there must be a call to EE_EngineDisconnect() to terminate the connection and free up resources. EE_EmoStateFree() and EE_EmoEngineEventFree() also free up memory from the buffers (see above).

Skipping to page 55 of the Emotiv SDK manual: the Cognitiv demo explains how the output of the Cognitiv detection shows whether the user is mentally engaged at a given time.

C++ static_cast< >( ) performs an explicit conversion between types, for example converting an int into a double, as in static_cast<double>(count); it converts a value's type rather than declaring a variable's type.

Up to 4 distinct actions may be distinguished at a time using Cognitiv. Also, to maintain the quality of the EEG signal without interference, relaxing the face and neck is required, as well as refraining from moving. Possibly, being relaxed will help someone focus more and improve the EEG signal. When a game is too difficult, many players choose to rage quit or tense up the muscles of their face, contorting it into strange expressions. I believe this is the same reason players tilt their controllers to the right when turning right in a racing game: believing that the movement of their physical body will aid their progress at the task at hand. Being stressed during a game is probably counterproductive because the player's focus goes down.

Today, we talked about implementing the learning process by recording the previous run of the game and seeing where the block lands (the block is moved forward by the player thinking "up," and whenever the player loses focus, the block falls). Next time, when the block is about to reach where it is predicted to land, maybe soft music can gradually turn on as it gets closer, or the game can generate a gradient color change that relaxes the brain, allowing for fewer distractions and more time to think about getting past the first hurdle.

According to http://www.huffingtonpost.com/2011/09/26/how-color-can-help-you-work_n_982043.html, when someone is feeling unmotivated, the color purple will calm and rejuvenate them into feeling motivated to continue working again. Purple has been used for healing and meditation and is said to help push past mental blocks. Therefore, in the game, maybe when the block gets closer to the previous dropping point, the background could fade to purple.

I saw Linux's open-source nature firsthand today. While installing the Emotiv SDKLite, it asked me for a password, but I wasn't supposed to need one because it was being installed locally. So one of them went into the install script and actually commented the sudo part out, which I didn't think was possible. I knew Linux was open source, but I didn't exactly understand the premise of open source: you can change any code, adding or taking away commands from programs, to essentially make them work how you want.

GitHub is a repository hosting service where people can fork code and submit pull requests to share and update code around the world. This way, a programmer who creates code can share it with the world so that others can import it without copying and pasting.

EmoComposer and the Emotiv Control Panel work together. You must first connect the Emotiv Control Panel to EmoComposer before using them (select the Connect tab, then press "To EmoComposer...").


*Good Posture = Good Presence*

Monday, June 16, 2014

Week One - Day Two

Today was a day in which I just focused on learning C++ and C# so that I could implement C++ code into C#. The program they are using for the game design is called Unity, and the EEG they are ordering is the Emotiv EPOC headset. Using the website https://emotiv.com/epoc/, I've learned that the Cognitiv Suite interprets the user's thoughts and feelings, which is what they wish to record so that whatever the player is thinking gets translated into the game. A simpler prototype game is currently being produced so that the Emotiv EPOC can be tested once it comes in. My job was to find the Emotiv EPOC library and translate that code into the C# code required for Unity.

The Emotiv EPOC software development kit will be used to access the programming of the headset, and I must bind the C++ code to C# so that the two translate well. http://emotiv.com/developer/SDK/UserManual.pdf (page 43) introduces the Emotiv API and the Emotiv EmoEngine, which I need to look into more.

C++ has been slightly different from Java, which I have touched upon before. Print lines require std::cout, but you can bring std into scope with a namespace declaration (using namespace std;), after which the std:: prefix is not required and cout alone suffices. The << operator means the value to its right is being printed or displayed, and the >> operator stores input in the variable that follows it.

YSP 2014 IRP (Individual Research Project)

Week One - Day One

This 6 week journey will allow me to learn about the Hierarchical Temporal Memory, the Cortical Learning Algorithm, Emotiv EPOC, and game design with Unity.

Hierarchical Temporal Memory ... very new to me. The people I was working under explained that the brain is made of different layers that filter information. When we receive information through any of our senses, for our brain to use that information to its full potential, it must go through processes to analyze the data collected. So, say you see a person's face (this was one of their examples): first you recognize the eyes and mouth as pieces of a whole, then you put them together to make a face, next you recognize that it is the face of a person you know (say, John), and lastly you realize that it is the friend you were excited to meet that day.

The two people developing a way to implement this type of learning in an educational game were thinking of creating an algorithm that could measure voltages from the brain to determine how the brain is learning, and then adjust the game to improve the player's learning.