Tuesday, July 15, 2014

Week Six - Day One

The poster is mostly complete except for one part: the addition of the graphs that show the frequencies after the Fast Fourier Transform. The Python program was adjusted so it produces graphs that display nicely. Before, the axes would clump together and the graphs would all overlap each other. This was fixed by creating labels for the x-axis and y-axis to show the intervals. By reading the minimum and maximum values in code, the tick intervals could be computed by taking the difference between the two and splitting it into four equal steps. This way all the numbers are spaced out evenly and adjust themselves automatically, which they couldn't do if the intervals were hard coded instead.
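
The exact adjustment to the Python program isn't reproduced in this post, but a minimal sketch of the idea, with a random placeholder array standing in for the real EEG channel data, might look like this:

import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(512)        # placeholder for one channel of EEG samples

fig, ax = plt.subplots()
ax.plot(data)

# Compute tick positions from the data itself instead of hard coding them:
# take the difference between the minimum and maximum and split it into
# four equal steps, so the labels stay evenly spaced for any input.
lo, hi = data.min(), data.max()
step = (hi - lo) / 4
ax.set_yticks(np.arange(lo, hi + step / 2, step))

ax.set_xlabel("Sample number")
ax.set_ylabel("Amplitude")
plt.show()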

By coding x = per[0][100:], the frequency array (the first row of the periodogram output, which holds the x-axis values) is sliced so that only the points from index 100 onward are plotted. This was done because when first graphing the points extracted from the Emotiv EPOC, there was likely noise that created a large spike at the very low frequencies, and that spike didn't carry much importance. Cutting it out acts like a high-pass filter, in which you filter out the low frequencies that distract from the rest of the spectrum. Also, the sampling rate was set to 128, meaning the Emotiv EPOC records 128 samples of the brain signal per second. That changed the frequency axis (x-axis) from running over an interval of 0.5 to an interval of about 70. That shows the full frequency range, and therefore we could see a small peak at 60 Hz. The small peak could be seen at both FC5 and F8, which are near the frontal lobe, and 60 Hz falls in the gamma band, which is associated with concentration or focus and would make sense for the action of reading. Another difference found was that the high peaks appear at the low frequencies for reading and in the middle frequencies, between 30 Hz and 40 Hz, for watching cartoons. The frequency bands show which waves (alpha, delta, theta, etc.) are active, and that can be used to determine the action being done.
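
My actual plotting script isn't shown in this entry, but a small sketch of the slicing and the 128 Hz sampling rate, using scipy.signal.periodogram and random numbers in place of the recorded electrode data, would be along these lines:

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 128                           # Emotiv EPOC sampling rate: 128 samples per second
t = np.arange(0, 10, 1 / fs)       # a placeholder ten-second recording
eeg = np.random.randn(t.size)      # stand-in for one electrode's samples

per = signal.periodogram(eeg, fs)  # per[0] is the frequency axis, per[1] the power

# Dropping the first bins removes the large spike near 0 Hz (a crude
# high-pass filter), the same idea as slicing per[0][100:] above.
skip = 100
plt.plot(per[0][skip:], per[1][skip:])
plt.xlabel("Frequency (Hz)")
plt.ylabel("Power")
plt.show()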




*Signal Processing with Neuroscience

Thursday, July 10, 2014

Week Five - Day Two



We talked about the poster, and at first ours was blocked off with text and very structured, laid out as a 3 by 3 grid of squares. The people working with us explained that it's sometimes okay to make a research poster consisting mainly of text, but when you understand the project well, it's better to use pictures and illustrations. An illustration can explain an idea in ways that a block of text can't; however, if the audience or the presenter does not fully understand the project, having text for the audience to read through can be useful.

The figure above is an illustration I drew in Paint to describe my idea for the poster. I chose to make the poster itself an illustration of the process, and apparently this type of poster is very useful for seminars. The reason is that even with illustrations and graphs, the overall poster still carries the message and flows, guiding the path the eye follows around the poster so that every point gets covered. The name of the project would sit in the middle and the intro in the top left corner. The information text would be fed into the brain, and the Fast Fourier Transform would spit out the different frequencies, which would contain sentences of different lengths describing what my part of the project consisted of. The output from the Fast Fourier Transform would then be fed into the Cortical Learning Algorithm, and the text would trickle down and collect at the bottom. Although the Cortical Learning Algorithm works from the bottom up, the flow of the poster leads down. After being classified by the Cortical Learning Algorithm, the output gets fed into the game as the controls instead of the EmoKey. The instructions screen and game-over screen would be replaced with text, with a picture of the game on the side. That all leads to the finished product and the name of the overall project, which is Game-Mind Interface.

A few tips that came up while practicing to present the poster:
- It's okay to say "That is beyond the extent of our research/knowledge" if someone asks something you don't know
- Keep it simple at first and go into more detail if they start asking questions, give a summary and general idea in the beginning
- Talk about Neurofeedback when discussing future plans about the educational game 

Week Five - Day One

Last week marked the end of working on the project itself, and now we've moved on to the Research Paper and Poster that are required. The research paper and poster are the final key elements of the individual research projects. The whole day consisted of writing our research papers and working on the poster.

The poster consists of all three of our parts, and I wrote a small paragraph on my part to be included in it:

'When the user wears the Emotiv EPOC headset and trains in the Expressiv or Cognitiv Suite, those specific actions generate certain brain signals. After the extraction of the brain signals from the headset, a Python program plots the function as a graph for visualization. Differentiating between all the activities requires a decomposition of the overall function. Therefore, the implementation of the Fast Fourier Transform function in Python helps reduce the function into its sine and cosine components, which become the output. The Fast Fourier Transform approximates an arbitrary function on an interval, transforming a function of time into a function of frequency. It essentially depicts a mapping between two domains, allowing the time domain to be converted into the frequency domain. The output then becomes the input passed on to the Hierarchical Temporal Memory.'

For the research paper, the abstract is supposed to be written with a motivation, problem statement, approach, results, and conclusions. The motivation explains why we care about the issue or project. The problem statement tells the reader what the issue at hand is and the scope of the work. The approach is how you go about solving the problem or working on the project. The results explain what the answer is or what came out of the project: the data, analysis, or thoughts. The conclusion explains what the future work may be or how the project will impact the world.

Week Four - Day Two

After collecting brain waves from someone who was working alongside me on another portion of the project, I am plugging the data into the Python code to compute and plot the Fast Fourier Transform of each signal. The three signals collected were ones where the subject spent a limited amount of time reading, solving integrals, and watching cartoons.

With the use of various Python libraries such as SciPy, NumPy, and Matplotlib, visualizing the Fast Fourier Transform became more accessible. Matplotlib is a 2D plotting library that produces graphical displays of models such as line graphs or other representations of data. The point of the program was to be able to analyze the input data in a simpler and more efficient fashion.
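
The plotting itself isn't shown in this entry, but a rough sketch of reading one of the converted recordings and displaying a single channel could look like the following (the filename and the one-channel-per-column layout are assumptions for illustration, not the converter's actual output format):

import numpy as np
import matplotlib.pyplot as plt

# "reading.txt" and the column layout are made up for this sketch.
data = np.loadtxt("reading.txt")
fc5 = data[:, 0]                   # one electrode's samples, e.g. FC5

plt.plot(fc5)
plt.xlabel("Sample number")
plt.ylabel("Signal amplitude")
plt.title("Raw EEG signal before the Fast Fourier Transform")
plt.show()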

Transforming the .edf files into ASCII files uses gcc, the compiler that turns the C source code (edf2ascii.c) into ./a.out, which is the assembler output, while the other file was the C source code. Typing gcc edf2ascii.c into the terminal compiled the converter, and running it turned the .edf files into comprehensible .txt (text) files.


Thursday, July 3, 2014

Week Four - Day One

Today, I am planning on using SciPy, NumPy, and other Python libraries to work with the Fast Fourier Transform and extract the frequency spectra from the brain signals. As the brain thinks, various signals are sent out in the form of waves, and I use the Fast Fourier Transform to break each periodic function down into its sine and cosine components.

I've also learned a bit about matplotlib, which is a Python library that lets one plot graphs or charts. Using (import matplotlib.pyplot as plt), one can just write plt.plot(x, y, 'bo') and then plt.show() and the graph will appear when you run the code. I've realized that 'bo' means the displayed graph will show blue dots, more specifically, as a scatterplot. 'rs' would display red squares and 'g^' displays green triangles. I've also realized that the code I was using was for scatterplots, and to get graphs with a grid, plt.grid(True) had to be used.
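
For example, the different format strings side by side (the small offsets are only there so the markers don't overlap):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]

plt.plot(x, y, 'bo')                        # blue circles
plt.plot(x, [v + 2 for v in y], 'rs')       # red squares
plt.plot(x, [v + 4 for v in y], 'g^')       # green triangles
plt.grid(True)                              # show the grid
plt.show()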

The code I wrote was:

from scipy.fftpack import fft, ifft, fftfreq
import matplotlib.pyplot as plt
import numpy as np

theta = 4*np.pi                   # angular frequency of the test signal
t = np.linspace(0, 4, 100)        # 100 time points between 0 and 4
x = np.sin(theta*t)*np.exp(-5*t)  # a decaying sine wave to transform
sp = fft(x)                       # Fast Fourier Transform of the signal
freq = fftfreq(t.shape[-1])       # matching frequency value for each FFT bin
plt.plot(freq, sp.real, freq, sp.imag)  # plot the real and imaginary components
plt.grid(True)
plt.show()

In which it displays:

[Figure: the real and imaginary parts of the FFT plotted against frequency]

A periodogram is an estimate of the spectral density of a signal. The Bohman, Parzen, boxcar, and Hamming windows are all different window functions that restrict the mathematical function to a certain interval.
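
As a rough sketch of what those windows change, scipy.signal.periodogram accepts a window name directly, so the same test signal can be estimated with each one (the 10 Hz tone here is just an example signal, not my recorded data):

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 128
t = np.arange(0, 8, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz tone plus noise

# The same periodogram estimate computed with different window functions.
for window in ['boxcar', 'hamming', 'parzen', 'bohman']:
    f, pxx = signal.periodogram(x, fs, window=window)
    plt.semilogy(f[1:], pxx[1:], label=window)   # skip the DC bin for the log plot

plt.xlabel("Frequency (Hz)")
plt.ylabel("Power spectral density")
plt.legend()
plt.show()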

When using t = np.linspace(0, 4, 100), the default number of points np.linspace generates is 50, and you can change the number of points by changing the last argument to another number, for example 100. The more points that are plotted, the better the data is represented.
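
A quick check of that default in the interpreter:

import numpy as np

print(np.linspace(0, 4).size)        # 50, the default number of points
print(np.linspace(0, 4, 100).size)   # 100, after changing the last argument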

Thursday, June 26, 2014

Week Three - Day Two

Today, I continued learning about the Fourier Transform, which is what I started reading about between Day One and Day Two. This is to help with the Python project that analyzes the data collected from the EEG. The signals collected, which are just functions on a graph, will be turned into functions of frequency to be analyzed by the Python code.

Also: Found out that Emotiv EPOC only runs for 800 seconds before crashing.

The next few pieces of data are also in the PowerPoint shown below but will be summarized here. The Fourier Transform is linear since it is both homogeneous and additive (a quick numerical check of these properties follows below).
For homogeneous systems, amplifying the input will likewise amplify the output. S{af(t)} = aS{f(t)}
For additive systems, the response of the sum is the sum of the responses. S{f1(t) + f2(t)} = S{f1(t)} + S{f2(t)}
All linear systems produce an output of zero when the input is zero. S{0} = 0
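
Both properties (and the zero case) are easy to confirm numerically with NumPy's FFT; a quick sketch:

import numpy as np

f1 = np.random.randn(256)
f2 = np.random.randn(256)
a = 3.0

# Homogeneous: scaling the input scales the output by the same factor.
print(np.allclose(np.fft.fft(a * f1), a * np.fft.fft(f1)))                  # True

# Additive: the transform of a sum is the sum of the transforms.
print(np.allclose(np.fft.fft(f1 + f2), np.fft.fft(f1) + np.fft.fft(f2)))    # True

# Zero in gives zero out.
print(np.allclose(np.fft.fft(np.zeros(256)), 0))                            # True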

Euler's formula:
e^(i·2πt) = cos(2πt) + i·sin(2πt)

Fourier analysis of a periodic function refers to taking the sine and cosine components of the overall "weird" function and separating them into simplified pieces of a whole. An example would be playing the piano: the keys of a chord are struck at the same time, so you hear one sound, but when you use Fourier analysis, the frequency of each key is taken into account and you get three different frequencies, which means three different waves. This is what transforming a function of time into a function of frequency means. And if you take the inverse, you are transforming a function of frequency back into a function of time.
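
A small NumPy sketch of the piano idea, using the approximate pitches of a C major chord (C4, E4, G4) as the three keys:

import numpy as np

fs = 4096                              # samples per second
t = np.arange(0, 1, 1 / fs)            # one second of "sound"

# Three keys struck together: C4, E4, G4 (approximate frequencies in Hz).
chord = (np.sin(2 * np.pi * 261.6 * t)
         + np.sin(2 * np.pi * 329.6 * t)
         + np.sin(2 * np.pi * 392.0 * t))

spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(chord.size, 1 / fs)

# The three largest peaks sit at the three key frequencies.
peaks = freqs[np.argsort(spectrum)[-3:]]
print(sorted(peaks))                   # roughly [262.0, 330.0, 392.0]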



The Fast Fourier Transform is an algorithm to compute the Discrete Fourier Transform and its inverse. A transform is a mapping between two sets of data/domains (the time domain, the frequency domain, or even a space domain). For a real-valued signal, the real component of the transform is an even function and the imaginary component is an odd function on the real (x-axis) and imaginary (y-axis) plane.
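
That symmetry can be checked in a couple of lines for any real-valued signal:

import numpy as np

x = np.random.randn(128)               # any real-valued signal
X = np.fft.fft(x)

# For real input, the spectrum satisfies X[k] = conjugate(X[N-k]):
# the real part is even and the imaginary part is odd.
print(np.allclose(X[1:].real, X[1:].real[::-1]))     # True (even)
print(np.allclose(X[1:].imag, -X[1:].imag[::-1]))    # True (odd)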

Wednesday, June 25, 2014

Week Three - Day One

The headset came in Monday, today being Tuesday. Getting a good signal was difficult and required some effort of putting more saline solution on the electrodes and weaving through masses of hair. But once the signal turned green (meaning a good signal), we could mess around with the Expressiv, Affectiv, and Cognitiv Suites.
The Expressiv Suite displayed the robot face mimicking our facial expressions and we could train expressions such as smiling and raising an eyebrow. 
The Affectiv Suite displayed the signals of our emotions. When the user was frustrated, the frustration signal would shoot up to the top of the chart. The signals would also oscillate to show that everything is displayed in real time. We tried calming down to increase the meditation signal and decrease the frustration one.
The Cognitiv Suite was where we spent most of the time. We found that training the neutral state was more difficult than we imagined because we couldn't think about anything and had to stay calm. Once you learn an action to use on the displayed cube, returning to the neutral state becomes difficult because you get too involved or immersed in the action; therefore, stepping away and relaxing helps reduce mental fatigue. Once the user starts adding more actions (only up to 4 actions at a time), the difficulty increases to a cap of "Expert Only!", showing that the more actions you try to think of at once, the harder it is to control one specific action. The mind gets jumbled up in all the thoughts, and making distinct actions isn't easy.
The skill rating bar on the side displays the percentage for that action. The skill rating shows how consistently one focuses on the mental thought during training, as a measure of the level of focus and concentration.

Now I must work on an analysis of the data using Python as another project. The goal is to take all the different signals from the 14 - 16 electrodes while the user, who is wearing the Emotiv EPOC headset, is reading or playing a game, and determine what the action is. To do this, I will be studying parts of Fourier analysis to understand how to reduce those signals into a simpler reading.

Sunday, June 22, 2014

Week Two - Day Two

An action potential in the brain occurs when ions are released outside the cell. When a bunch of ions are released and create a wave at the electrode (through repelling away from each other), the electrode becomes polarized, thus allowing the voltage to be measured. Electrodes are spaced out around the scalp in order to capture neuron activity from all regions.

Theta waves have a frequency range between 4 Hz and 7 Hz and can represent abnormal activity, along with reports of relaxed, meditative, and creative states. The color purple has the shortest visible wavelength, with a frequency of about 7.5 × 10^14 Hz, the suggested correlation being that the color purple relaxes you.

Eye tracking and EEG are sometimes linked together to improve the reading of which areas of the brain light up when engaged in cognitive tasks.

Action power, shown in the Emotiv Control Panel for the Emotiv SDK, indicates the certainty with which the user is hitting the mental state they are supposed to activate for the cognitive task. The more action power, the stronger the signal and the connection to that area of the brain, meaning the concentration is better and the block is able to perform that certain action longer.

Emotiv API functions that modify or retrieve EmoEngine settings are prefixed with "EE_", which is why most of the code I've seen relating to the Emotiv EPOC has the prefix "EE_", for example EE_EmoEngineEventCreate. Expressive commands or actions are prefixed with "EE_Expressiv". The trained_sig variable works with EE_ExpressivGetTrainedSignatureAvailable( ), which reports whether a trained signature for actions such as smile or eyebrow is available. The eyelid-related expressions (Blink, Wink, Look Left, and Look Right), however, cannot be trained.

A PowerPoint explaining each Emotiv EPOC suite, as well as a summary of the first two weeks, is down below.

Tuesday, June 17, 2014

Week Two - Day One

Starting to touch upon the Emotiv EPOC C++ code and learning how it works. EE_EmoEngineEventGetEmoState( ) and EE_EmoStateUpdated are used to retrieve the updated EmoState. When starting to connect to the EmoEngine, always have a case in which you test:

// Inside the startup switch: bail out if connecting to the EmoEngine fails.
if (EE_EngineConnect() != EDK_OK)
{
     throw std::runtime_error("Emotiv Engine start up failed.");
}
break;   // leave the enclosing switch case once connected

Buffers use memory temporarily to store information, whether an output or input, while the data is being transferred.

It's also important to always check for other error codes that could be returned from EE_EngineGetNextEvent( ), such as when no new events have been published. At the end of the program, there must be a line that calls EE_EngineDisconnect( ) to terminate the connection and free up the resources. EE_EmoStateFree( ) and EE_EmoEngineEventFree( ) also free up memory from the buffers (see above).

Skipping to (page 55) on the Emotiv SDK manual. The Cognitiv Demo explains how the output of the Cognitiv detection shows whether users are mentally engaged at a given time. 

C++'s static_cast< >( ) is usually used for converting one type into another, for example an integer into a double, explicitly giving the value a type.

Up to 4 distinct actions may be distinguished at a time using the Cognitiv Suite. Also, to maintain the quality of the EEG signal without interference, relaxing the face and neck is required, as well as refraining from moving. Possibly, being relaxed will help someone focus more and improve the EEG signal. When a game is too difficult, many players choose to rage quit or tense up the muscles on their face, contorting it into strange expressions. I believe this is the same reason why players tilt their controllers to the right when turning right in a racing game: believing that the movement of their physical body will aid their progress at the task at hand. Being stressed during a game is probably counterproductive because the player's focus goes down.

Today, we talked about implementing the learning process by recording previous runs of the game and seeing where the block lands. (The block is moved forward by the player thinking "up," and whenever the player loses focus, the block falls.) Next time, when the block is about to reach the spot where it is predicted to land, maybe soft music can gradually turn on as they get closer, or the game can generate a gradient color change that relaxes their brain, allowing for fewer distractions and more time to think about getting past their first hurdle.

According to http://www.huffingtonpost.com/2011/09/26/how-color-can-help-you-work_n_982043.html, when someone is feeling unmotivated, the color purple will calm and rejuvenate them into feeling motivated to continue working again. Purple has been used for healing and meditation and is said to be beneficial for pushing past mental blocks. Therefore, in the game, maybe when the block gets closer to the previous dropping point, the background could fade to purple.

I saw Linux's open-source side today: while installing the Emotiv SDKLite, it asked me for a password, but I wasn't supposed to need one because it was being installed locally. One of the others went into the install script and actually commented the sudo part out, which I didn't think was possible, even though I knew Linux was open source. I just didn't exactly understand the premise of open source, which is that you can change any code and add or take away commands from programs to essentially make them how you want.

GitHub is a hosting service for code repositories where people can fork projects and open pull requests to share and update code around the world. This way, a programmer who creates code can share it with the world so that others can import it without copying and pasting.

The EmoComposer and the Emotiv Control Panel work together. The Emotiv Control Panel must first be connected to the EmoComposer before using them. (Select the Connect tab and then press To EmoComposer...)


*Good Posture = Good Presence*

Monday, June 16, 2014

Week One - Day Two

Today was a day in which I just focused on learning C++ and C# so that I could integrate C++ code into C#. The program they are using for the game design is called Unity, and the EEG they are ordering is the Emotiv EPOC headset. Using the website https://emotiv.com/epoc/, I've learned that the Cognitiv Suite interprets the user's thoughts and feelings, which is what they wish to record so that whatever the player is thinking will get translated into the game. A prototype, a simpler game, is currently being produced so that the Emotiv EPOC can be tested once it comes in. My job was to find the Emotiv EPOC library and translate that code into the C# code that is required for Unity.

The Emotiv EPOC software development kit will be used to access the programming of the headset, and I must bind the C++ code to C# so that both translate well. http://emotiv.com/developer/SDK/UserManual.pdf (page 43) introduces the Emotiv API and the Emotiv EmoEngine, which I need to look into more.

C++ has been slightly different from Java, which I have touched upon before. Print lines require std::cout, but you can universalize std with a namespace declaration (using namespace std;), and then std:: is not required for the rest of the program; only cout is needed. (<<) means that the value on the right is being printed or displayed, and (>>) stores the input into the variable on its right.

YSP 2014 IRP (Individual Research Project)
Week One - Day One

This 6 week journey will allow me to learn about the Hierarchical Temporal Memory, the Cortical Learning Algorithm, Emotiv EPOC, and game design with Unity.

Hierarchical Temporal Memory ... very new to me. The people I was working under explained that the brain is made of different layers that filter information as it comes in. When we receive information through any of our senses, for our brain to use it to its full potential, it must go through processes that analyze the data collected. So, say you see a person's face (this was one of their examples): first you recognize the eyes and mouth as pieces of a whole, then you put them together to make a face, next you recognize that it is the face of a person you know (say John), and lastly you realize that it is your friend whom you were excited to meet that day.

The two people developing a way to implement this type of learning in an educational game were thinking of creating an algorithm that could measure voltage within the brain to determine how the brain is learning, and then adjust the game to improve the player's learning.