LANE McINTOSH

Bringing insights from the brain to machine learning.
Theoretical neuroscientist at Stanford.

About Me

In a fraction of a second, our brain captures information about the world and effortlessly extracts meaning from this sensory input. What exactly are the computations these neural circuits perform, and can the answers inform how we build artificially intelligent systems like deep learning networks?

I am a PhD candidate at Stanford, where I combine theory with experimental tests of that theory to address these questions. My PhD research focuses on deep learning and theoretical neuroscience, and looks to the brain for inspiration on how to move computer vision from relatively clean, standardized benchmarks to the real world, where noise, time, and efficiency become pressing issues.

Curriculum Vitae




Timeline

  • 2012-Present

    Stanford University
    Ph.D. Neuroscience
    Ph.D. Minor Computer Science

    Advisors: Steve Baccus and Surya Ganguli
    NVIDIA Best Poster Award, SCIEN 2015
    Top 10% Poster Award, CS231N CNNs
    Ruth L. Kirschstein National Research Service Award
    Mind, Brain, and Computation Traineeship
    NSF IGERT Graduate Fellowship

  • 2010-2012

    University of Hawaii
    M.A. Mathematics

    Advisor: Susanne Still, Machine Learning Group
    Departmental Merit Award
    NSF SUPER-M Graduate Fellowship
    Kotaro Kodama Scholarship
    Graduate Teaching Fellowship

  • 2006-2010

    University of Chicago
    B.A. Computational Neuroscience

    Research: MacLean Comp. Neuroscience Lab
    Research: Dept. of Economics Neuroecon. Group
    Research: Gallo Memory Lab
    Lerman-Neubauer Junior Teaching Fellowship
    NIH Neuroscience and Neuroengineering Fellowship
    Innovative Funding Strategy Award

  • 2009

    Institute for Advanced Study
    Undergraduate Research Fellow

    Bioinformatics research at Simons Center for Systems Biology in Princeton, NJ

  • Pre-2006

    Originally from San Diego

    Valedictorian
    Bank of America Mathematics Award
    President's Gold Educational Excellence Award
    California Scholarship Federation Gold Seal
    Advanced Placement Scholar with Distinction

Projects



Deep Learning Models of the Retina

A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In neural circuits, ubiquitous nonlinear processes present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. We demonstrate that deep convolutional neural networks capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than previous models. We are then able to probe the learned models to gain insights about the retina, for instance how it compresses natural scenes efficiently through feedforward inhibition and how it transforms potentially large sources of extrinsic and intrinsic noise into sub-Poisson variability. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.
Lane McIntosh*, Niru Maheswaranathan*, Aran Nayebi, Surya Ganguli, Stephen Baccus
Accepted Paper, Advances in Neural Information Processing Systems (NIPS), 2016
Accepted Talk, Society for Neuroscience, 2016
Accepted Poster, Computational and Systems Neuroscience (COSYNE), 2016
NVIDIA Best Poster, SCIEN Industry Affiliates Meeting (image processing), 2015
Top 10% Poster Award, CS231n Convolutional Neural Networks, 2015

NIPS 2016 Paper · COSYNE 2016 Poster · Stanford MBC Talk · IEEE Talk



Multiple Spatial Scales of Inhibition Improve Information Transmission in the Retina

Retinal ganglion cells, the bottleneck through which all visual information reaches the brain, have linear response properties that appear to maximize the mutual information between the visual world and their responses, subject to a variance constraint. In this paper I contribute a new theoretical finding: generating the ganglion cells' linear receptive field from inhibitory interneurons with disparate spatial scales provides a basis that allows the receptive field to maximize information across a wide range of environments whose signal-to-noise ratios vary by orders of magnitude.
Mihai Manu*, Lane McIntosh*, David Kastner, Benjamin Naecker, and Stephen Baccus
In Preparation, 2015


SfN 2015 Poster · GitHub



Video-based Event Recognition

How can we automatically extract events from video? We used a database of surveillance videos and examined the performance of SVMs and Convolutional Neural Networks in detecting events like people getting in and out of cars.
Ian Ballard* and Lane McIntosh*
CS221 Artificial Intelligence Poster, 2014


PDF · Poster



Learning Predictive Filters

How should an intelligent system that aims to retain only information predictive of the future filter its data? We analytically derive the optimal predictive filter for Gaussian input using recent theorems from the information bottleneck literature. Using numerical methods, we then show that these optimally predictive filters resemble the receptive fields in the early visual pathways of vertebrates.
Lane McIntosh
CS229 Machine Learning Poster, 2013


PDF · Poster



Thermodynamics of Prediction in Model Neurons

Recent theorems in nonequilibrium thermodynamics show that information-processing inefficiency provides a lower bound on energy dissipation in certain systems. We extend these results to model neurons and find that adapting neurons that match the timescale of their inputs perform predictive inference while minimizing energy dissipation.
Lane McIntosh and Susanne Still
Master's Thesis, 2012


PDF · GitHub

Teaching



Convolutional Neural Networks





CS 231n Convolutional Neural Networks

Stanford University, Winter 2016. Teaching assistant for this class on convolutional neural networks taught by Fei-Fei Li, Andrej Karpathy, and Justin Johnson. Throughout the class, students learn to derive gradients for large computational graphs; implement, train, and debug their own neural networks; and follow recent developments in deep learning. 330 students enrolled.

Math Tools For Neuroscience



Math Tools for Neuroscience

Stanford University, Winter 2016, Spring 2015. Co-taught this class with fellow graduate student Kiah Hardcastle; we covered a wide variety of useful mathematical tools, including dimensionality reduction, Fourier transforms, dynamical systems, statistics, information theory, and Bayesian probability. The audience was mostly graduate students and postdocs.

Intro to Perception



ExploreCourses Listing

Stanford University, Fall 2015 and Fall 2014. Teaching assistant for this introductory undergraduate course surveying perception research from the retina to high-level cortex, including behavioral experiments.

Precalculus



Precalculus Course Website

University of Hawaii, 2010-12. First a teaching assistant and then the lecturer for this large introductory undergraduate mathematics course.

Biophysics and Chemical Biology


University of Chicago, Spring 2008. Teaching assistant for the third course in the advanced-track biology sequence for students who scored 5/5 on their AP Biology test. This course focused on how to read original research papers in biophysics and chemical biology, with weekly presentations.