Aiming to Map the Structure of Thought
Our lab develops new mathematical models of neural computation in both tissue and silicon, aiming to understand the computation, its efficiencies, and its potential applications. Our modeling spans the gamut from connectomics-based analysis of small animal circuits to the combinatorial interpretation of circuits in artificial neural networks. Our interdisciplinary approach strives to build on the interplay between, and support the progression of, neuroscience and machine learning research.
News
- January 22, 2025: Our work Wasserstein Distances, Neuronal Entanglement, and Sparsity has been accepted into ICLR 2025 as a Spotlight Presentation!
- December 23, 2024: Our work Presynaptic input synchrony at scale has been accepted into COSYNE 2025!
- December 3, 2024: Our work Jailbreak Defense in a Narrow Domain: Limitations of Existing Methods and a New Transcript-Classifier Approach has been accepted into the NeurIPS 2024 Workshop on New Frontiers in Adversarial Machine Learning and the Workshop on Socially Responsible Language Modelling Research!
- October 16, 2024: Our work Structure Matters: Deciphering Neural Network's Properties from its Structure has been accepted into the NeurIPS 2024 Workshop on Symmetry and Geometry in Neural Representations!
- September 5, 2024: Introducing a new manuscript: On the Complexity of Neural Computation in Superposition.
- May 29, 2024: Introducing two new manuscripts: A connectomics-driven analysis reveals novel characterization of border regions in mouse visual cortex and Sparse Expansion and Neuronal Disentanglement.
- May 28, 2024: We celebrated the end of the year with a delicious barbecue dinner!
