From research done at UC Berkeley:
Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.
The primary link is too busy to access this morning, but this explanation is offered at the YouTube site:
The left clip is a segment of the movie that the subject viewed while in the magnet. The right clip shows the reconstruction of this movie from brain activity measured using fMRI. The reconstruction was obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video. (In brief, the algorithm processes each of the 18 million clips through the brain model, and identifies the clips that would have produced brain activity as similar to the measured brain activity as possible. The clips used to fit the model, those used to test the model and those used to reconstruct the stimulus were entirely separate.) Brain activity was sampled every one second, and each one-second section of the viewed movie was reconstructed separately.
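In other words, the reconstruction is a big matching search: run every candidate clip through the fitted brain model, score how well its predicted activity agrees with what was actually measured, and blend the best matches. Here is a toy sketch of that matching step in Python. Everything in it (the linear "brain model," feature vectors standing in for clips, the correlation score, the top-k averaging) is my own illustrative assumption, not the researchers' actual code:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins, all hypothetical: each "clip" is a feature vector,
    # and the fitted brain model is a fixed linear map from clip
    # features to voxel activity.
    n_clips, n_features, n_voxels = 1000, 32, 64
    library = rng.normal(size=(n_clips, n_features))        # candidate clips
    brain_model = rng.normal(size=(n_features, n_voxels))   # encoding model

    # Simulate one second of measured fMRI activity: the predicted
    # response to some true stimulus, plus measurement noise.
    true_clip = rng.normal(size=n_features)
    measured = true_clip @ brain_model + 0.1 * rng.normal(size=n_voxels)

    def reconstruct_second(measured, library, brain_model, top_k=20):
        """Score every library clip by how well its predicted brain
        activity correlates with the measured activity, then average
        the top matches."""
        predicted = library @ brain_model             # (n_clips, n_voxels)
        p = predicted - predicted.mean(axis=1, keepdims=True)
        m = measured - measured.mean()
        scores = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m))
        best = np.argsort(scores)[-top_k:]            # best-matching clips
        return library[best].mean(axis=0)             # blended reconstruction

    reconstruction = reconstruct_second(measured, library, brain_model)
    cos = (true_clip @ reconstruction) / (
        np.linalg.norm(true_clip) * np.linalg.norm(reconstruction))
    print(f"similarity of reconstruction to true clip: {cos:.2f}")

In the real system the blended output would presumably be video frames rather than a feature vector, but the ranking idea is the same: the clip never has to appear in the training data, it just has to evoke similar predicted brain activity.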
Via Neatorama.
 
No comments on this? To me this is fascinating. I think of early-generation digital cameras and their pixelated images. If this technology gets refined the way cameras were (maybe not as quickly), this would be like electronic mind-reading. Think of the possibilities: "locked-in syndrome" is the first thing that comes to mind. Wow, this is sci-fi come to reality. Just add prognostication and you've got "Minority Report."
Wouldn't dream recording be possible when this technology gets more advanced?
Won't be long before they match exactly. At that point we'll have bionic brains. Makes me think that someday we won't need our biological bodies any more and we will live entirely inside silicon electronic circuits. It's a long way off, but that's the logical result of all this.