Papers

Preprints

  • Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization [arXiv]
    S. Cen, Y. Wei, and Y. Chi, preprint. Short version appeared at NeurIPS 2021.

  • Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence [arXiv]
    W. Zhan*, S. Cen*, B. Huang, Y. Chen, J. D. Lee, and Y. Chi, preprint. Short version appeared as an oral presentation at the NeurIPS 2021 Workshop on Optimization for Machine Learning. (* = equal contribution)

Conference Proceedings

  • Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction [arXiv] [Code]
    B. Li, S. Cen, Y. Chen, and Y. Chi, International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.

Journals

  • Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization [arXiv] [PDF] [Code]
    S. Cen, C. Cheng, Y. Chen, Y. Wei, and Y. Chi, Operations Research, accepted.

    • 2021 INFORMS George Nicholson Student Paper Competition Finalist

  • Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction [arXiv] [Code]
    B. Li, S. Cen, Y. Chen, and Y. Chi, Journal of Machine Learning Research, vol. 21, no. 180, pp. 1-51, 2020.

  • Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data [arXiv]
    S. Cen, H. Zhang, Y. Chi, W. Chen, and T.-Y. Liu, IEEE Transactions on Signal Processing, vol. 68, pp. 3976-3989, 2020.

  • A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization [arXiv]
    A. Milzarek, X. Xiao, S. Cen, Z. Wen, and M. Ulbrich, SIAM Journal on Optimization.