
Using AI-Driven Deep Neural Networks to Uncover Principles of Brain Representation and Organization

Topic
Using Artificial-Intelligence-Driven Deep Neural Networks to Uncover Principles of Brain Representation and Organization
Speaker
Daniel Yamins, McGovern Institute, MIT
Monday, July 18, 2016 - 16:00-17:30
Room 1504, NYU Shanghai | 1555 Century Avenue, Pudong New Area, Shanghai


 

Human behavior is founded on the ability to identify meaningful entities in complex noisy data streams that constantly bombard the senses.  For example, in vision, retinal input is transformed into rich object-based scenes; in audition, sound waves are transformed into words and sentences.  In this talk, I will describe my work using computational models to help uncover how sensory cortex accomplishes these enormous computational feats.
 
The core observation underlying my work is that optimizing neural networks to solve challenging real-world artificial intelligence (AI) tasks can yield predictive models of the cortical neurons that support these tasks.  I will first describe how we leveraged recent advances in AI to train a neural network that approaches human-level performance on a challenging visual object recognition task.  Critically, even though this network was not explicitly fit to neural data, it is nonetheless predictive of neural response patterns of neurons in multiple areas of the visual pathway, including higher cortical areas that have long resisted modeling attempts.  Intriguingly, an analogous approach turns out to be helpful for studying audition, where we recently found that neural networks optimized for word recognition and speaker identification tasks naturally predict responses in human auditory cortex to a wide spectrum of natural sound stimuli, and help differentiate poorly understood non-primary auditory cortical regions.  Together, these findings suggest the beginnings of a general approach to understanding sensory processing in the brain.

I'll give an overview of these results, explain how they fit into the historical trajectory of AI and computational neuroscience, and discuss future questions of great interest that may benefit from a similar approach. 

Daniel Yamins, McGovern Institute, MIT

I'm a computational neuroscientist at MIT's Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research. I'll be starting as an Assistant Professor at Stanford University in September 2016, at the Stanford Neuroscience Institute and in the departments of Psychology and Computer Science. I work on science and technology challenges at the intersection of neuroscience, artificial intelligence, psychology, and large-scale data analysis.
 
The brain is the embodiment of the most beautiful algorithms ever written.  My research goal is to "reverse engineer" these algorithms, both to learn how our minds work and to build more effective artificial intelligence systems.
I also like: (a) bonsai trees, (b) playing the pipe organ, (c) traveling in Asia, and (d) history.  

Location & Details

Transportation Tips:

  • Taxi card
  • Metro: Century Avenue Station, Metro Lines 2/4/6/9, Exit 6 (Location B)
  • Shuttle bus: available from the Zhongbei Campus and the ECNU Minhang Campus