LANE McINTOSH

Bringing insights from the brain to machine learning.
Theoretical neuroscientist at Stanford.

About Me

In a fraction of a second, the brain captures information about the world and effortlessly extracts meaning from it. What exactly are the computations these neural circuits perform? Are there general principles for understanding how neural systems process information, for instance given the statistics of their inputs? And can the answers to these questions inform how we build artificially intelligent systems like deep learning networks?

I am a PhD candidate at Stanford, where I combine theory with experimental tests of that theory to address these questions. My PhD research focuses on predictive inference and coding in the early stages of vision, and brings together tools from machine learning, information theory, high-dimensional data analysis, and nonequilibrium physics.

Curriculum Vitae




Timeline

  • 2012-Present

    Stanford University
    Ph.D. Neuroscience
    Ph.D. Minor Computer Science

    Advisors: Baccus and Ganguli Labs
    NVIDIA Best Poster Award, SCIEN 2015
    Top 10% Poster Award, CS231n Convolutional Neural Networks
    Mind, Brain, and Computation Traineeship
    NSF IGERT Graduate Fellowship

  • 2010-2012

    University of Hawaii
    M.A. Mathematics

    Advisor: Susanne Still, Machine Learning Group
    Departmental Merit Award
    NSF SUPER-M Graduate Fellowship
    Kotaro Kodama Scholarship
    Graduate Teaching Fellowship

  • 2006-2010

    University of Chicago
    B.A. Computational Neuroscience

    Research: MacLean Comp. Neuroscience Lab
    Research: Dept. of Economics Neuroecon. Group
    Research: Gallo Memory Lab
    Lerman-Neubauer Junior Teaching Fellowship
    NIH Neuroscience and Neuroengineering Fellowship
    Innovative Funding Strategy Award

  • 2009

    Institute for Advanced Study
    Undergraduate Research Fellow

    Bioinformatics research at Simons Center for Systems Biology in Princeton, NJ

  • Pre-2006

    Originally from
    San Diego

    Valedictorian
    Bank of America Mathematics Award
    President's Gold Educational Excellence Award
    California Scholarship Federation Gold Seal
    Advanced Placement Scholar with Distinction

Projects



Deep Learning Models of the Retina

The retina consists of three layers of cells that form the first stage of all vertebrate visual processing. Deep convolutional neural networks succeed at many image and pattern recognition tasks, but can these models capture the computations biological visual pathways perform when viewing natural movies? We show that these deep learning models are considerably more accurate than pre-existing retinal models and generalize significantly better across stimulus classes. We furthermore probe the model with structured stimuli to reveal nonlinear computations important for biological vision. (A schematic sketch of this class of model follows the links below.)
Lane McIntosh*, Niru Maheswaranathan*, Aran Nayebi, Surya Ganguli, Stephen Baccus
NVIDIA Best Poster, SCIEN Industry Affiliates Meeting (image processing), 2015
Top 10% Poster Award, CS231n Convolutional Neural Networks, 2015
In Preparation, 2015


Links: SCIEN Poster PDF · CS231n PDF · CS231n Poster · Github
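
For a flavor of this class of model, here is a minimal PyTorch sketch. It is illustrative only: the layer sizes, nonlinearities, and names are my assumptions, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class RetinaCNN(nn.Module):
        """Map a short movie clip to the firing rates of several ganglion cells."""
        def __init__(self, n_frames=40, n_cells=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(n_frames, 8, kernel_size=15),  # movie frames enter as channels
                nn.Softplus(),
                nn.Conv2d(8, 16, kernel_size=9),
                nn.Softplus(),
            )
            self.readout = nn.LazyLinear(n_cells)  # dense layer onto the recorded cells
            self.rate = nn.Softplus()              # firing rates are non-negative

        def forward(self, clip):                   # clip: (batch, frames, height, width)
            x = self.features(clip).flatten(1)
            return self.rate(self.readout(x))      # predicted firing rates

    model = RetinaCNN()
    rates = model(torch.randn(2, 40, 50, 50))      # two 40-frame, 50x50-pixel clips
    print(rates.shape)                             # torch.Size([2, 5])

A Poisson log-likelihood loss on recorded spike counts would be one natural training objective for a model like this.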



Multiple Spatial Scales of Inhibition Improve Information Transmission in the Retina

Retinal ganglion cells, the bottleneck through which all visual information reaches the brain, have linear response properties that appear to maximize information between the visual world and the ganglion cell responses, subject to a variance constraint. In this paper I contribute a new theoretical finding: generating the ganglion cells' linear receptive field from inhibitory interneurons with disparate spatial scales provides a basis that allows the receptive field to maximize information across environments whose signal-to-noise ratios vary by orders of magnitude. (A toy version of the underlying information calculation follows the links below.)
Mihai Manu*, Lane McIntosh*, David Kastner, Benjamin Naecker, and Stephen Baccus
In Preparation, 2015


Links: SfN 2015 Poster · Github
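
As a toy illustration of the underlying calculation (a hypothetical sketch of mine, not the paper's analysis): for a linear-Gaussian channel, the transmitted information decomposes across frequency bands as 0.5 * log2(1 + SNR), so we can compare a receptive field built from one spatial scale against one built from a mixture of scales as the input signal-to-noise ratio sweeps over orders of magnitude.

    import numpy as np

    freqs = np.linspace(0.01, 2.0, 200)            # spatial frequency axis

    def gaussian_rf(sigma):
        """Fourier amplitude of a Gaussian receptive-field component."""
        return np.exp(-0.5 * (freqs * sigma) ** 2)

    def info_bits(rf_power, snr):
        """Gaussian-channel information: 0.5 * log2(1 + S/N) per band."""
        return 0.5 * np.log2(1.0 + snr * rf_power).sum()

    narrow = gaussian_rf(sigma=1.0)
    wide = gaussian_rf(sigma=4.0)
    mixed = 0.5 * narrow + 0.5 * wide              # two spatial scales combined

    for snr in [0.1, 1.0, 10.0, 100.0]:            # orders-of-magnitude range
        print(f"SNR={snr:6.1f}  narrow={info_bits(narrow**2, snr):6.2f} bits"
              f"  mixed={info_bits(mixed**2, snr):6.2f} bits")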



Video-based Event Recognition

How can we automatically extract events from video? Using a database of surveillance videos, we compared the performance of support vector machines and convolutional neural networks at detecting events like people getting in and out of cars. (A toy sketch of the SVM baseline follows the links below.)
Ian Ballard* and Lane McIntosh*
CS221 Artificial Intelligence Poster, 2014


Links: PDF · Poster
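
As a toy sketch of the SVM half of this comparison (the features and labels below are synthetic stand-ins, not the project's surveillance-video pipeline):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))    # stand-in per-clip feature vectors
    y = rng.integers(0, 2, size=200)  # 1 = "person enters/exits a car", 0 = other

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # near chance on noise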



Learning Predictive Filters

How should an intelligent system intent on keeping only information predictive of the future filter its data? We analytically find the optimal predictive filter for Gaussian input using recent theorems from the information bottleneck literature. Using numerical methods, we then show that these optimally predictive filters resemble the receptive fields in the early visual pathways of vertebrates. (The closed-form structure this draws on is sketched after the links below.)
Lane McIntosh
CS229 Machine Learning Poster, 2013


Links: PDF · Poster
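
For reference, the closed-form structure the Gaussian case draws on, as I understand the Gaussian information-bottleneck literature (the notation is mine, and this is a sketch rather than a precise statement): compress the past X into a filtered variable T that remains informative about the future Y,

    \min_{A,\,\Sigma_\xi} \; I(X;T) - \beta\, I(T;Y),
    \qquad T = AX + \xi, \quad \xi \sim \mathcal{N}(0, \Sigma_\xi).

The optimal rows of A are eigenvectors of \Sigma_{x|y} \Sigma_x^{-1}, with the eigenvector of eigenvalue \lambda_i entering the solution once the tradeoff parameter crosses \beta_i = 1/(1 - \lambda_i).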



Thermodynamics of Prediction in Model Neurons

Recent theorems in nonequilibrium thermodynamics show that information-processing inefficiency provides a lower bound on energy dissipation in certain systems. We extend these results to model neurons and find that adapting neurons that match the timescale of their inputs perform predictive inference while minimizing energetic inefficiency. (The bound in question is sketched after the links below.)
Lane McIntosh and Susanne Still
Master's Thesis, 2012


Links: PDF · Github
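
The flavor of the bound being extended, paraphrased from memory rather than quoted: for a system with state s_t driven by a signal x_t, the work dissipated at each step is lower-bounded by the system's "nostalgia", the memorized information that fails to be predictive,

    \beta \, \langle W_{\mathrm{diss}}(t) \rangle \;\ge\; I_{\mathrm{mem}}(t) - I_{\mathrm{pred}}(t),
    \qquad I_{\mathrm{mem}}(t) = I[s_t; x_t], \quad I_{\mathrm{pred}}(t) = I[s_t; x_{t+1}].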

Teaching



Convolutional Neural Networks





CS231n Convolutional Neural Networks

Stanford University, Winter 2016. Teaching assistant for this class on convolutional neural networks, taught by Fei-Fei Li, Andrej Karpathy, and Justin Johnson. Throughout the class, students learn how to derive gradients for large computational graphs; implement, train, and debug their own neural networks; and gain an understanding of recent developments in deep learning. 330 students enrolled. (A flavor of the gradient exercises is sketched below.)
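
To give a flavor of the gradient exercises (an illustrative example of mine, not an actual assignment from the class): derive a gradient by hand with the chain rule, then check it numerically.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def f(x, w):
        """Tiny computational graph: f(x, w) = sum(sigmoid(w * x))."""
        return sigmoid(w * x).sum()

    def grad_w(x, w):
        """Hand-derived backprop: d/dw sigmoid(w*x) = s * (1 - s) * x."""
        s = sigmoid(w * x)
        return (s * (1 - s) * x).sum()

    x, w, eps = np.array([0.5, -1.2, 2.0]), 0.7, 1e-6
    numeric = (f(x, w + eps) - f(x, w - eps)) / (2 * eps)
    print(np.isclose(grad_w(x, w), numeric))   # True: analytic matches numeric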

Math Tools for Neuroscience




Stanford University, Winter 2016 and Spring 2015. Co-taught this class with fellow graduate student Kiah Hardcastle, covering a wide variety of useful mathematical tools, including dimensionality reduction, Fourier transforms, dynamical systems, statistics, information theory, and Bayesian probability. The audience was mostly graduate students and postdocs.

Intro to Perception



ExploreCourses Listing

Stanford University, Fall 2015 and Fall 2014. Teaching assistant for this introductory undergraduate course, which surveys the literature on perception, from the retina to high-level cortex, along with behavioral experiments.

Precalculus



Precalculus Course Website

University of Hawaii, 2010-2012. First a teaching assistant, then a lecturer, for this large introductory undergraduate mathematics course.

Biophysics and Chemical Biology


University of Chicago, Spring 2008. Teaching assistant for the third course in the advanced-track biology sequence for students who scored 5/5 on their AP Biology test. This course focused on how to read original research papers in biophysics and chemical biology, with weekly presentations.