Learning about brains from computers, and vice versa
For many years, Tomaso Poggio's lab at MIT ran two parallel lines of research. Some projects were aimed at understanding how the brain works, using complex computational models. Others were aimed at improving the abilities of computers to perform tasks that our brains do with ease, such as making sense of complex visual images.
But recently Poggio has found that the work has progressed so far, and the two tasks have begun to overlap to such a degree, that it's now time to combine the two lines of research.
He'll describe his lab's change in approach, and the research that led up to it, at the American Association for the Advancement of Science annual meeting in Boston, on Saturday, Feb. 16.
The turning point came last year, when Poggio and his team were working on a computer model designed to figure out how the brain processes certain kinds of visual information. As a test of the vision theory they were developing, they tried using the model vision system to actually interpret a series of photographs. Although the model had not been developed for that purpose (it was intended only as a theoretical analysis of how certain pathways in the brain work), it turned out to be as good as, or even better than, the best existing computer-vision systems, and as good as humans, at rapidly recognizing certain kinds of complex scenes.
"This is the first time a model has been able to reproduce human behavior on that kind of task," says Poggio, the Eugene McDermott Professor in MIT's Department of Brain and Cognitive Sciences and Computer Science and Artificial Intelligence Laboratory.
As a result, "My perspective changed in a dramatic way," Poggio says. "It meant that we may be closer to understanding how the visual cortex recognizes objects and scenes than I ever thought possible."
The experiments involved a task that is easy for people but very hard for computer-vision systems: deciding whether any animals were present in photos that ranged from relatively simple close-ups to complex landscapes with a great variety of detail. It's a very complex task, since "animals" can include anything from snakes to butterflies to cattle, against a background that might include distracting trees or buildings. People were shown the scenes for just a fraction of a second, a form of rapid recognition that relies on a particular part of the human visual cortex known as the ventral stream.
The visual cortex is a large part of the brain's processing system, and one of the most complex, so reaching an understanding of how it works could be a significant step toward understanding how the whole brain works, one of the greatest problems in science today.
"Computational models are beginning to provide powerful new insights into the key problem of how the brain works," says Poggio, who is also co-director of the Center for Biological and Computational Learning and an investigator at the McGovern Institute for Brain Research at MIT.
Although the model Poggio and his team developed produces surprisingly good results, "we do not quite understand why the model works as well as it does," he says. They are now working on developing a comprehensive theory of vision that can account for these and other recent results from the lab.
"Our visual abilities are computationally amazing, and we are still far from imitating them with computers," Poggio says. But the new work shows that it may be time for researchers in artificial intelligence to start paying close attention to the latest developments in neuroscience, he says.
Source: Massachusetts Institute of Technology