Pedram Akbarian

Email: akbarian@utexas.edu

I’m a PhD student in the Electrical and Computer Engineering Department at the University of Texas at Austin, advised by Prof. Nhat Ho. I’m broadly interested in theoretical and practical aspects of modern machine learning, with a focus on understanding the fundamental principles of designing and training scalable and efficient foundation models.

Currently, I am working on two primary research directions:

  • Efficient Training and Inference for Foundation Models: Studying the statistical efficiency and training dynamics of Mixture-of-Experts (MoE) architectures to improve the scalability and performance of large foundation models.

  • Time Series Foundation Models: Exploring the fundamental limits of, and methods for, building scalable and generalizable models for time series analysis, with an emphasis on improving numerical reasoning capabilities.

Prior to joining UT Austin, I completed my Bachelor’s degree in Electrical Engineering with a minor in Computer Engineering at the University of Tehran, Iran. See my CV for more details.

selected publications

  1. Improving Computational Complexity in Statistical Models with Local Curvature Information
    Pedram Akbarian*, Tongzheng Ren*, Jiacheng Zhuo, Sujay Sanghavi, and Nhat Ho
    In International Conference on Machine Learning (ICML), 2024
  2. Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts?
    Huy Nguyen, Pedram Akbarian, and Nhat Ho
    In International Conference on Machine Learning (ICML), 2024
  3. A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts
    Huy Nguyen, Pedram Akbarian, TrungTin Nguyen, and Nhat Ho
    In International Conference on Machine Learning (ICML), 2024
  4. Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts
    Huy Nguyen, Pedram Akbarian, Fanqi Yan, and Nhat Ho
    In International Conference on Learning Representations (ICLR), 2024