Imagine a reality where computers can visualize what you are thinking.
Sound far out? It’s now a step closer to becoming a reality thanks to four scientists at Kyoto University in Kyoto, Japan. In late December, Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani released the results of their recent research on using artificial intelligence to decode thoughts on the scientific platform BioRxiv.
Machine learning has previously been used to study brain scans (MRIs, or magnetic resonance imaging) and to generate visualizations of what a person is thinking of when it comes to simple, binary images like black-and-white letters or simple geometric shapes (as shown in Figure 2 here).
But the scientists from Kyoto developed new techniques of “decoding” thoughts using deep neural networks (artificial intelligence). The new technique allows the scientists to decode more sophisticated “hierarchical” images, which have multiple layers of color and structure, like a picture of a bird or a man wearing a cowboy hat, for example.
Deep image reconstruction: natural images (seen images) [animated GIF]
“We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person’s brain activity,” Kamitani, one of the scientists, tells CNBC Make It. “Our previous method was to assume that an image consists of pixels or simple shapes. But it’s known that our brain processes visual information hierarchically, extracting different levels of features or components of different complexities.”
And the new AI research allows computers to detect objects, not just binary pixels. “These neural networks or AI models can be used as a proxy for the hierarchical structure of the human brain,” Kamitani says.
Deep image reconstruction: visual imagery [animated GIF]
For the research, over the course of 10 months, three subjects were shown natural images (like photographs of a bird or a person), artificial geometric shapes and alphabetical letters for varying lengths of time.
In some instances, brain activity was measured while a subject was looking at one of 25 images. In other cases, it was logged afterward, when subjects were asked to think of the image they had previously been shown.
Once the brain activity was scanned, a computer reverse-engineered (or “decoded”) the information to generate visualizations of a subject’s thoughts.
The flowchart, embedded below, was made by the research team at the Kamitani Lab at Kyoto University and breaks down the science of how a visualization is “decoded.”
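The core loop the flowchart describes — decode neural-network features from brain activity, then adjust an image until its own features match the decoded ones — can be sketched in a toy form. Everything below is a hypothetical stand-in, not the lab’s actual model: a fixed random linear map plays the role of one DNN feature layer, another random matrix plays the role of the fMRI-to-feature decoder, and the image is a flat vector of pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the toy setup.
N_PIXELS, N_FEATURES, N_VOXELS = 64, 32, 100

# Stand-in for one DNN feature layer (image -> feature vector).
feature_layer = rng.normal(size=(N_FEATURES, N_PIXELS))
# Stand-in for the trained decoder (fMRI voxels -> feature vector).
decoder = 0.1 * rng.normal(size=(N_FEATURES, N_VOXELS))

def extract_features(image):
    """Toy substitute for a DNN layer's response to an image."""
    return feature_layer @ image

def reconstruct(decoded_features, steps=5000, lr=0.002):
    """Gradient descent on the pixels so that the image's features
    match the features decoded from brain activity."""
    image = np.zeros(N_PIXELS)
    for _ in range(steps):
        err = extract_features(image) - decoded_features
        image -= lr * (feature_layer.T @ err)  # gradient of 0.5*||err||^2
    return image

# Simulated experiment: the subject "sees" a target image, and the decoder
# stage yields the target's true features plus fMRI-decoding noise.
target = rng.normal(size=N_PIXELS)
decoded = extract_features(target) + decoder @ rng.normal(size=N_VOXELS)

recon = reconstruct(decoded)
```

In the real study the feature extractor is a deep convolutional network and the decoder is trained on the subjects’ recorded fMRI data; here both are random placeholders, so `recon` only demonstrates that the feature-matching optimization converges, not that it recovers the target.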