In a fraction of a second, our brain captures sensory information about the world and effortlessly extracts meaning from it. What exactly are the computations these neural circuits perform, and can the answers inform how we build artificially intelligent systems like deep learning networks?
I am a PhD candidate at Stanford, where I combine theory with experimental tests of that theory to get at these questions. My PhD research focuses on deep learning and theoretical neuroscience, and looks to the brain for inspiration on how to move computer vision from relatively clean, standardized benchmarks to the real world, where noise, time, and efficiency become pressing issues.
Artificial intelligence research in computer vision.
Publication: Recurrent segmentation for variable computational budgets
Advisor: Susanne Still, Machine Learning Group
Departmental Merit Award
NSF SUPER-M Graduate Fellowship
Kotaro Kodama Scholarship
Graduate Teaching Fellowship
Research: MacLean Comp. Neuroscience Lab
Research: Dept. of Economics Neuroecon. Group
Research: Gallo Memory Lab
Lerman-Neubauer Junior Teaching Fellowship
NIH Neuroscience and Neuroengineering Fellowship
Innovative Funding Strategy Award
Bioinformatics research at Simons Center for Systems Biology in Princeton, NJ
Bank of America Mathematics Award
President's Gold Educational Excellence Award
California Scholarship Federation Gold Seal
Advanced Placement Scholar with Distinction
A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In neural circuits, ubiquitous nonlinear processes present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. We demonstrate that deep convolutional neural networks capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than previous models. We are then able to probe the learned models to gain insights about the retina, for instance how it compresses natural scenes efficiently through feedforward inhibition and how it transforms potentially large sources of extrinsic and intrinsic noise into sub-Poisson variability. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.
Lane McIntosh*, Niru Maheswaranathan*, Aran Nayebi, Surya Ganguli, Stephen Baccus
Accepted Paper, Advances in Neural Information Processing Systems (NIPS), 2016
Accepted Talk, Society for Neuroscience, 2016
Accepted Poster, Computational and Systems Neuroscience (COSYNE), 2016
NVIDIA Best Poster, SCIEN Industry Affiliates Meeting (image processing), 2015
Top 10% Poster Award, CS231n Convolutional Neural Networks, 2015
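The architecture below is only a toy sketch in plain NumPy, not the published model: the filter counts, layer sizes, and softplus readout are illustrative assumptions. It shows the general shape of the approach, a stack of convolutional layers mapping a stimulus frame to nonnegative predicted firing rates:

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2D convolution: x is (H, W), kernels is (K, kh, kw).
    Returns (K, H-kh+1, W-kw+1) feature maps."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def retina_cnn(stimulus, w1, w2, w_out, b_out):
    """Toy forward pass: two conv layers with ReLU, then an affine
    readout through a softplus so predicted firing rates are nonnegative."""
    h1 = relu(conv2d(stimulus, w1))
    # second layer: sum conv responses over first-layer channels
    h2 = relu(sum(conv2d(h1[c], w2) for c in range(h1.shape[0])))
    z = w_out @ h2.ravel() + b_out
    return np.log1p(np.exp(z))  # softplus: one rate per model ganglion cell

rng = np.random.default_rng(0)
stimulus = rng.standard_normal((20, 20))             # one grayscale frame
w1 = rng.standard_normal((4, 5, 5)) * 0.1            # 4 first-layer filters
w2 = rng.standard_normal((2, 3, 3)) * 0.1            # 2 second-layer filters
w_out = rng.standard_normal((3, 2 * 14 * 14)) * 0.1  # 3 output "cells"
rates = retina_cnn(stimulus, w1, w2, w_out, b_out=0.1)
print(rates.shape)
```

In practice such a model would be fit to recorded spike trains (e.g. with a Poisson log-likelihood loss); the forward pass above only fixes the functional form.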
Retinal ganglion cells, the bottleneck through which all visual information reaches the brain, have linear response properties that appear to maximize information between the visual world and the ganglion cell responses, subject to a variance constraint. In this paper I contribute a new theoretical finding: generating the ganglion cells' linear receptive field from inhibitory interneurons with disparate spatial scales provides a basis that allows the receptive field to maximize information across environments whose signal-to-noise ratios vary by orders of magnitude.
Mihai Manu*, Lane McIntosh*, David Kastner, Benjamin Naecker, and Stephen Baccus
In Review, Nature Neuroscience 2017
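The flavor of information maximization under a variance (power) constraint can be seen in the textbook water-filling solution for parallel Gaussian channels. This is a standard illustration, not the paper's derivation: high-SNR channels receive power first, and very noisy channels are cut off entirely, much as high spatial frequencies are suppressed at low light levels.

```python
import numpy as np

def water_filling(snr, total_power, iters=100):
    """Allocate power p_i across independent Gaussian channels to maximize
    sum(log(1 + p_i * snr_i)) subject to sum(p_i) <= total_power.
    The optimum is p_i = max(mu - 1/snr_i, 0); bisect on the water level mu."""
    inv = 1.0 / np.asarray(snr, dtype=float)
    lo, hi = 0.0, inv.max() + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - inv, 0.0)

# Channels stand in for spatial frequencies; SNR values are arbitrary.
snr = np.array([10.0, 5.0, 1.0, 0.2])
p = water_filling(snr, total_power=1.0)
print(p)  # power goes to the two high-SNR channels; the rest are cut off
```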
How can we automatically extract events from video? Using a database of surveillance videos, we compared the performance of support vector machines and convolutional neural networks at detecting events like people getting in and out of cars.
Ian Ballard* and Lane McIntosh*
CS221 Artificial Intelligence Poster, 2014
How should an intelligent system intent on only keeping information predictive about the future filter its data? We analytically find the optimal predictive filter for Gaussian input using recent theorems from the information bottleneck literature. Using numerical methods, we then show the resemblance of these optimally predictive filters to the receptive fields in early visual pathways of vertebrates.
CS229 Machine Learning Poster, 2013
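For jointly Gaussian variables, the information bottleneck has a known analytic solution (Chechik et al., 2005): the optimal linear projections are left eigenvectors of Σ_{x|y} Σ_x⁻¹ with the smallest eigenvalues. A minimal numerical sketch, with synthetic data standing in for past and future visual input (dimensions and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n = 5000, 6

# Synthetic jointly Gaussian "past" x and "future" y: y is a noisy linear view of x
x = rng.standard_normal((n_samples, n))
y = (x @ rng.standard_normal((n, n)).T) * 0.5 + 0.5 * rng.standard_normal((n_samples, n))

xc, yc = x - x.mean(0), y - y.mean(0)
Sx = xc.T @ xc / (n_samples - 1)
Sy = yc.T @ yc / (n_samples - 1)
Sxy = xc.T @ yc / (n_samples - 1)

# Conditional covariance of x given y (Schur complement of the joint covariance)
Sx_given_y = Sx - Sxy @ np.linalg.solve(Sy, Sxy.T)

# Optimal projections: left eigenvectors of Sx_given_y @ inv(Sx) with the
# smallest eigenvalues (eigenvalues lie in [0, 1]; small = most predictive).
M = Sx_given_y @ np.linalg.inv(Sx)
eigvals, left_vecs = np.linalg.eig(M.T)  # eig of M.T gives left eigenvectors of M
order = np.argsort(eigvals.real)

k = 2
W = left_vecs.real[:, order[:k]].T  # k x n predictive filter, up to IB scaling
print(W.shape)
```

Rows of `W` are the maximally predictive linear features of the past; in the paper's setting, the analogous filters resemble early visual receptive fields.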
Recent theorems in nonequilibrium thermodynamics show that information-processing inefficiency provides a lower bound on energy dissipation in certain systems. We extend these results to model neurons and find that adapting neurons that match the timescale of their inputs perform predictive inference while minimizing energy inefficiency.
Lane McIntosh and Susanne Still
Master's Thesis, 2012
Stanford University, Winter 2016 and 2017. Teaching assistant for this class on convolutional neural networks taught by Fei-Fei Li, Andrej Karpathy, Justin Johnson and Serena Yeung. Throughout the class students learn how to derive gradients for large computational graphs, implement, train, and debug their own neural networks, and gain an understanding of recent developments in deep learning. Over 600 students enrolled in 2017.
Stanford University, Spring 2015, Winter 2016 and 2017. Co-taught this class with fellow graduate student Kiah Hardcastle, covering a wide variety of useful mathematical tools including dimensionality reduction, Fourier transforms, dynamical systems, statistics, information theory, and Bayesian probability. Mostly a graduate student and postdoctoral audience.
Stanford University, Fall 2015, 2014. Teaching assistant for this introductory undergraduate course surveying the literature on perception from the retina to high-level cortex and behavioral experiments.
University of Hawaii, 2010-12. First a teaching assistant, then lecturer, for this large undergraduate introductory mathematics course.
University of Chicago, Spring 2008. Teaching assistant for the third course in the advanced-track biology sequence for students who scored 5/5 on their AP Biology test. This course focused on how to read original research papers in biophysics and chemical biology, with weekly presentations.