Yilun Kuang

PhD Student in Machine Learning, Center for Data Science, New York University.


I am Yilun Kuang, a second-year PhD student in Data Science at NYU CDS & NYU CILVR Lab, advised by Andrew Gordon Wilson. My research interests include Large Language Models, Diffusion Models, Self-Supervised Learning, Multimodal Vision-Language Learning, Probabilistic Generative Models, NeuroAI & AI for Science, Generalization Theory, and Numerical Methods.

Prior to starting my PhD, I graduated magna cum laude with high honors from NYU with a BA in Mathematics. I was fortunate to work with SueYeon Chung and Eero Simoncelli on self-supervised learning inspired by manifold geometry and efficient coding at the Center for Computational Neuroscience, Flatiron Institute, Simons Foundation.

Outside of research, I enjoy playing ping pong, ultimate frisbee, basketball, and reading about philosophy, politics, and economics.

Selected Publications

* denotes equal contributions
  1. Bayesian Optimization of Antibodies Informed by a Generative Model of Evolving Sequences
    Alan Nawzad Amin, Nate Gruver*, Yucen Lily Li*, Yilun Kuang*, Hunter Elliott, Calvin McCarter, Aniruddh Raghu, Peyton Greenside, and Andrew Gordon Wilson
    NeurIPS Workshop on AI for New Drug Modalities (NeurIPS Workshop), 2024 Spotlight
  2. Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models
    Sanae Lotfi*, Yilun Kuang*, Brandon Amos, Micah Goldblum, Marc Finzi, and Andrew Gordon Wilson
    Neural Information Processing Systems (NeurIPS), 2024 Spotlight
    ICML Workshop on Theoretical Foundations of Foundation Models (ICML Workshop), 2024 Best Paper Award
  3. Non-Vacuous Generalization Bounds for Large Language Models
    Sanae Lotfi*, Marc Finzi*, Yilun Kuang*, Tim G. J. Rudner, Micah Goldblum, and Andrew Gordon Wilson
    International Conference on Machine Learning (ICML), 2024
    NeurIPS Workshop on Self-Supervised Learning & Mathematics of Modern Machine Learning (NeurIPS Workshop), 2023
  4. Unsupervised Learning on Spontaneous Retinal Activity Leads to Efficient Neural Representation Geometry
    Andrew Ligeralde*, Yilun Kuang*, Thomas Yerxa, Miah N. Pitcher, Marla Feller, and SueYeon Chung
    NeurIPS Workshop on Unifying Representations in Neural Models (NeurIPS Workshop), 2023
  5. Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations
    Thomas Yerxa, Yilun Kuang, Eero Simoncelli, and SueYeon Chung
    Neural Information Processing Systems (NeurIPS), 2023
    Computational and Systems Neuroscience (COSYNE), 2023