
Seeing what the brain sees

by Unknown user, 2011. 10. 14.


Ok...This isn't completely on topic, but it's just too cool for me not to post about. Take a look at the video below. I'll explain it briefly first, and then go into more detail after you've seen it.

The video is actually two vids side-by-side. On the left side is part of a movie trailer. That's not so interesting or important. On the right side is the interesting clip. Researchers had subjects watch the clip on the left and scanned their brains while they were doing so. After that, they took the scan results and, using a computer, reconstructed the video the subjects had watched from only the scan results. Now have a look:

Amazing, isn't it? Granted, the reconstructed video isn't identical to the original, but it's pretty good. So, how was it done? Here's a short summary of what the researchers did:

1) They had subjects watch many movie trailers (not the one in the vid) and scanned their brains while doing so. They scanned the brains using a technique called functional MRI (fMRI), which measures blood flow and can be used to estimate brain activity.

2) They had a computer analyze the scans at thousands of different points in the visual cortex of the brain.  The goal was to determine how the brain responds to different visual inputs. For each of those thousands of points, they created a "dictionary" of its activity in response to various stimuli.

3) After they created the dictionaries, they tested them. First, they scanned subjects' brains while they watched other movie trailers (again, not the one in the vid). Then they had the computer analyze those movie trailers and predict what the brain activity would look like. They then compared the real scans to the computer predictions, which were confirmed to be similar.

4) They then created a library of about 18 million random one-second clips from YouTube. Each of the clips was run through the computer prediction process.

5) Subjects were then shown the video in the YouTube clip and their brains were scanned. The computer took the scan results and compared them to its library of 18 million one-second clips. For each one-second segment of the scan, the computer picked the 100 one-second clips that were predicted to be closest to the real scan and averaged them together, creating the video you see on the right.
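To make step 5 concrete, here is a toy sketch of that "pick the 100 closest predicted clips and average them" idea. Everything here is invented for illustration: the tiny library size, voxel count, random data, and function names are all stand-ins, and the real study's encoding models are far more sophisticated than a simple correlation match.

```python
import math
import random

random.seed(0)

N_LIBRARY = 1000   # stand-in for the ~18 million one-second clips
N_VOXELS = 50      # stand-in for the thousands of fMRI measurement points
TOP_K = 100        # the study averaged the 100 best-matching clips

def correlation(a, b):
    """Pearson correlation between two equal-length activity vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

# Each library clip carries a *predicted* brain response (this is what the
# "dictionaries" from steps 1-3 would produce) plus its own pixel data.
library = [
    {
        "predicted_response": [random.gauss(0, 1) for _ in range(N_VOXELS)],
        "pixels": [random.random() for _ in range(16)],  # toy 4x4 frame
    }
    for _ in range(N_LIBRARY)
]

def reconstruct_second(actual_scan):
    """Average the clips whose predicted responses best match the real scan."""
    ranked = sorted(
        library,
        key=lambda clip: correlation(clip["predicted_response"], actual_scan),
        reverse=True,
    )
    top = ranked[:TOP_K]
    # Pixel-wise average of the best matches -> one blurry reconstructed frame.
    n_pixels = len(top[0]["pixels"])
    return [sum(clip["pixels"][i] for clip in top) / TOP_K
            for i in range(n_pixels)]

actual_scan = [random.gauss(0, 1) for _ in range(N_VOXELS)]
frame = reconstruct_second(actual_scan)
print(len(frame))
```

The averaging is why the video on the right looks so blurry: no single library clip matches the scan exactly, so blending the 100 best guesses keeps the shared structure (a face, a moving blob) while washing out the details where the guesses disagree.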




We are stepping closer and closer to understanding how the brain works. The technique isn't all that useful right now, but imagine what it could lead to. Imagine a day when the interpretation process is much more refined and the reconstructed videos are much more accurate. We could see what people are imagining or dreaming. Or we could use a similar process to identify words: people could think something, have a computer scan and interpret it, and send it to a recipient, enabling, in a sense, telepathy. Of course, we are still a long way from anything like that, and who knows whether the technical hurdles will prove too much, but it is always exciting to think about. We have already created ways for monkeys to control virtual and real robotic arms with their thoughts, so maybe it isn't so far off.