Pedram Akbarian

Email: akbarian@utexas.edu

I’m a PhD student in the Electrical and Computer Engineering Department at the University of Texas at Austin, advised by Prof. Nhat Ho and Prof. Atlas Wang. I’m broadly interested in theoretical and practical aspects of modern machine learning, with a focus on understanding the fundamental principles of designing and training scalable and efficient foundation models.

Currently, I am working on two primary research directions:

  • Efficient Training and Inference for Foundation Models: Studying the statistical efficiency and training dynamics of Mixture of Experts (MoE) architectures to improve the scalability and performance of large foundation models.

  • Time Series Foundation Models: Exploring the fundamental limits of scalable, generalizable models for time series analysis, and the methodologies for building them, with an emphasis on strengthening numerical reasoning capabilities.

Prior to joining UT Austin, I completed my Bachelor’s degree in Electrical Engineering with a minor in Computer Engineering at the University of Tehran, Iran. See my CV for more details.

selected publications

(* denotes equal contribution)

  1. Preprint
    Sigmoid Self-Attention is Better than Softmax Self-Attention: A Mixture-of-Experts Perspective
    Fanqi Yan, Huy Nguyen, Pedram Akbarian, Nhat Ho, and Alessandro Rinaldo
    arXiv:2502.00281, 2025
    Under review
  2. Preprint
    Quadratic Gating Functions in Mixture of Experts: A Statistical Insight
    Pedram Akbarian*, Huy Nguyen*, Xing Han*, and Nhat Ho
    arXiv:2410.11222, 2024
    Under review
  3. ICLR
    Statistical Advantages of Perturbing Cosine Router in Sparse Mixture of Experts
    Huy Nguyen, Pedram Akbarian*, Trang Pham*, Trang Nguyen, Shujian Zhang, and Nhat Ho
    In International Conference on Learning Representations (ICLR), 2025
  4. ICML
    Improving Computational Complexity in Statistical Models with Local Curvature Information
    Pedram Akbarian*, Tongzheng Ren*, Jiacheng Zhuo, Nhat Ho, and others
    In International Conference on Machine Learning (ICML), 2024
  5. ICML
    Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts?
    Huy Nguyen, Pedram Akbarian, and Nhat Ho
    In International Conference on Machine Learning (ICML), 2024
  6. ICML
    A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts
    Huy Nguyen, Pedram Akbarian, TrungTin Nguyen, and Nhat Ho
    In International Conference on Machine Learning (ICML), 2024
  7. ICLR
    Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts
    Huy Nguyen, Pedram Akbarian, Fanqi Yan, and Nhat Ho
    In International Conference on Learning Representations (ICLR), 2024