In a fraction of a second, our brains capture sensory information about the world and effortlessly extract meaning from it. What exactly are the computations these neural circuits perform? Are there general principles for understanding how a neural system processes information, given, for instance, the statistics of its inputs? And can the answers to these questions inform how we build artificially intelligent systems like deep learning networks?
I am a PhD candidate at Stanford, where I combine theory with experimental tests of that theory to get at these questions. My PhD research focuses on predictive inference and coding in the early stages of vision, and brings together tools from machine learning, information theory, high-dimensional data analysis, and nonequilibrium physics.
Advisor: Susanne Still, Machine Learning Group
Departmental Merit Award
NSF SUPER-M Graduate Fellowship
Kotaro Kodama Scholarship
Graduate Teaching Fellowship
Research: MacLean Comp. Neuroscience Lab
Research: Dept. of Economics Neuroecon. Group
Research: Gallo Memory Lab
Lerman-Neubauer Junior Teaching Fellowship
NIH Neuroscience and Neuroengineering Fellowship
Innovative Funding Strategy Award
Bioinformatics research at Simons Center for Systems Biology in Princeton, NJ
Bank of America Mathematics Award
President's Gold Educational Excellence Award
California Scholarship Federation Gold Seal
Advanced Placement Scholar with Distinction
In just three layers of cells, the retina encodes the visual world into a binary code of action potentials that conveys information about motion, object edges, direction, and even predictions about what will happen next in the world. In this paper we use convolutional neural networks to build the most accurate model to date of retinal responses to spatiotemporally varying binary white noise, providing a foundation for predicting retinal responses to natural scenes. We also investigate how well convolutional neural networks in general can recover simple, sparse models from high-dimensional data (a schematic sketch of this model class follows this entry).
Lane McIntosh, Niru Maheswaranathan
Top 10% Poster Award, CS231n Convolutional Neural Networks, 2015
In Preparation, 2015
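For readers curious what this class of model looks like in code, here is a minimal sketch, assuming a PyTorch setup; it is not the paper's architecture, and the class name, layer sizes, Poisson loss, and all hyperparameters are illustrative.

import torch
import torch.nn as nn

# Illustrative CNN mapping clips of binary white noise to non-negative
# firing rates for a handful of retinal ganglion cells.
class RetinaCNN(nn.Module):
    def __init__(self, n_cells=5, history=40):
        super().__init__()
        # Treat the `history` most recent stimulus frames as input channels.
        self.features = nn.Sequential(
            nn.Conv2d(history, 8, kernel_size=15),  # spatiotemporal filters
            nn.Softplus(),
            nn.Conv2d(8, 16, kernel_size=9),
            nn.Softplus(),
        )
        self.readout = nn.LazyLinear(n_cells)  # one output rate per cell

    def forward(self, stimulus):
        x = self.features(stimulus)
        rates = self.readout(x.flatten(1))
        return nn.functional.softplus(rates)  # firing rates are non-negative

# Toy usage: 40-frame clips of 50x50 binary white noise, fake spike counts.
stim = torch.randint(0, 2, (32, 40, 50, 50)).float()
spikes = torch.poisson(torch.ones(32, 5))
model = RetinaCNN()
loss = nn.PoissonNLLLoss(log_input=False)(model(stim), spikes)
loss.backward()

A Poisson negative log-likelihood is a natural choice here because the targets are spike counts.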
Retinal ganglion cells, the bottleneck through which all visual information reaches the brain, have linear response properties that appear to maximize the information between the visual world and the ganglion cell responses, subject to a variance constraint. In this paper I contribute a new theoretical finding: constructing the ganglion cell's linear receptive field from inhibitory interneurons with disparate spatial scales provides a basis that allows the receptive field to maximize information across a wide range of environments whose signal-to-noise ratios vary by orders of magnitude (the objective is stated schematically after this entry).
Mihai Manu, Lane McIntosh, David Kastner, Benjamin Naecker, and Stephen Baccus
In Preparation, 2015
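As a schematic of the objective (notation mine, not necessarily the paper's), the receptive field k is chosen to maximize the mutual information between stimulus and response under an output-variance budget:

\[
\max_{k}\; I(X; Y)
\quad \text{subject to} \quad \operatorname{Var}(Y) \le V,
\qquad Y = k \ast X + n .
\]

For Gaussian signal and noise, the optimal filter shifts from a whitening, center-surround shape at high signal-to-noise ratio toward a smoothing shape at low signal-to-noise ratio, which is why a basis of interneurons spanning multiple spatial scales can track the optimum across environments.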
How can we automatically extract events from video? Using a database of surveillance videos, we compared the performance of support vector machines and convolutional neural networks at detecting events like people getting in and out of cars (a toy version of the SVM pipeline follows this entry).
Ian Ballard* and Lane McIntosh*
CS221 Artificial Intelligence Poster, 2014
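As an illustration of the SVM half of that comparison, here is a toy pipeline; the frame-difference features and synthetic labels are stand-ins for the real surveillance data.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
clips = rng.random((100, 20, 32, 32))  # 100 clips of 20 frames each
labels = rng.integers(0, 2, 100)       # 1 = an event occurs in the clip

# Motion energy per frame: mean absolute difference between adjacent frames.
motion = np.abs(np.diff(clips, axis=1)).mean(axis=(2, 3))  # shape (100, 19)
features = np.stack([motion.mean(1), motion.max(1), motion.std(1)], axis=1)

clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, features, labels, cv=5).mean())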
How should an intelligent system that aims to keep only information predictive of the future filter its data? We analytically find the optimal predictive filter for Gaussian input using recent theorems from the information bottleneck literature (stated schematically below). Using numerical methods, we then show that these optimally predictive filters resemble the receptive fields found in the early visual pathways of vertebrates.
CS229 Machine Learning Poster, 2013
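In information bottleneck notation (mine; the poster's may differ), the problem is to compress the past of a signal into a representation Z that stays maximally informative about its future:

\[
\min_{p(z \mid x_{\text{past}})}\;
I(X_{\text{past}}; Z) \;-\; \beta\, I(Z; X_{\text{future}}) .
\]

For jointly Gaussian past and future, the optimum is known to be a noisy linear projection of the past onto eigenvectors of \Sigma_{x_{\text{past}} \mid x_{\text{future}}} \Sigma_{x_{\text{past}}}^{-1} (Chechik et al., 2005), which is what makes the optimal filter analytically tractable.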
Recent theorems in nonequilibrium thermodynamics show that information-processing inefficiency provides a lower bound on energy dissipation in certain systems. We extend these results to model neurons and find that adapting neurons that match the timescale of their inputs perform predictive inference while minimizing energetic inefficiency (one form of the bound is given after this entry).
Lane McIntosh and Susanne Still
Master's Thesis, 2012
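One common form of such a bound (notation mine) relates dissipation to non-predictive memory. For a system with state s_t responding to an input signal x_t, the average work dissipated satisfies

\[
\beta \,\langle W_{\text{diss}} \rangle \;\ge\; I(s_t; x_t) \;-\; I(s_t; x_{t+1}),
\]

where \beta is the inverse temperature: any information the system retains about its input beyond what predicts the next input must be paid for in dissipated energy.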
Stanford University, Spring 2015. Co-taught this class with fellow graduate student Kiah Hardcastle, covering a wide variety of useful mathematical tools including dimensionality reduction, Fourier transforms, dynamical systems, statistics, information theory, and Bayesian probability. The audience was mostly graduate students and postdocs.
Stanford University, Fall 2014. Teaching assistant for this introductory undergraduate course surveying the literature on perception from the retina to high-level cortex and behavioral experiments.
University of Hawaii, 2010-12. First a teaching assistant, then lecturer, for this large undergraduate introductory mathematics course.
University of Chicago, Spring 2008. Teaching assistant for the third course in the advanced-track biology sequence for students who earned the top score of 5 on the AP Biology exam. The course focused on reading original research papers in biophysics and chemical biology, with weekly presentations.