Research Overview
Our vision is to democratize AI by transcending the need for big compute and big data. We design scalable and sample-efficient algorithms that enable users to perform accurate training and inference. We are particularly interested in deploying AI in distributed, heterogeneous settings with communication, computation, and data constraints.
Tools and Approach
We use a diverse set of analytical tools drawn from optimization, applied probability, statistics, queueing theory, and coding theory. Our approach involves the following steps, often traversed in an iterative manner:
- Choosing a model that captures the essential elements of the system, while being simple and tractable.
- Designing algorithms that are provably optimal within the model framework.
- Evaluating the proposed algorithms on the real system, going beyond the assumptions of the model.
Selected Projects and Talk Videos
Communication-Efficient Aggregation Methods for Federated Learning

- FedFisher: Leveraging Fisher Information for One-Shot Federated Learning
- Divyansh Jhunjhunwala, Shiqiang Wang, and Gauri Joshi
- International Conference on Artificial Intelligence and Statistics (AISTATS), May 2024
- FedExP: Speeding Up Federated Averaging via Extrapolation
- Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi
- International Conference on Learning Representations (ICLR), May 2023
- Selected for a Spotlight presentation (top 25% of accepted papers)
- Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
- Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, Swanand Kadhe, Gauri Joshi
- Neural Information Processing Systems (NeurIPS), Dec 2021
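To give a flavor of these aggregation methods, here is a minimal sketch of Fisher-weighted one-shot averaging: each coordinate of the aggregated model is a weighted average of the client models, with weights given by the clients' diagonal Fisher estimates, so coordinates a client is more confident about count for more. This is a toy illustration of the idea behind FedFisher, not the algorithm itself; the parameter vectors and Fisher values are made up.

```python
import numpy as np

def fisher_weighted_average(params, fishers, eps=1e-8):
    """Aggregate client models in one shot: a per-coordinate weighted average,
    with each client's diagonal Fisher estimate acting as a confidence weight."""
    P = np.stack(params)          # shape (num_clients, dim)
    F = np.stack(fishers) + eps   # guard against all-zero weights
    return (F * P).sum(axis=0) / F.sum(axis=0)

# Toy usage: the clients disagree on coordinate 0; client 0 is more confident there.
params  = [np.array([1.0, 2.0]), np.array([3.0, 2.0])]
fishers = [np.array([10.0, 1.0]), np.array([1.0, 1.0])]
print(fisher_weighted_average(params, fishers))  # coordinate 0 lands near client 0's value
```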
Learning-Augmented Routing Methods for Multi-server Queueing Systems

- Efficient Reinforcement Learning for Routing Jobs in Heterogeneous Queueing Systems
- Neharika Jali, Guannan Qu, Weina Wang, and Gauri Joshi
- International Conference on Artificial Intelligence and Statistics (AISTATS), May 2024
- Job Dispatching Policies for Queueing Systems with Unknown Service Rates
- Tuhinangshu Choudhury, Gauri Joshi, Weina Wang, and Sanjay Shakkottai
- ACM MobiHoc, July 2021
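To illustrate the flavor of learning-augmented routing, the sketch below dispatches jobs across heterogeneous servers whose service rates are unknown: rates are estimated online from observed service times, most jobs go to the server with the best current estimate, and a small exploration probability keeps the estimates honest. The arrival model, rates, and epsilon-greedy rule are illustrative assumptions, not the algorithms from the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dispatcher for a heterogeneous multi-server system.
true_rates = np.array([1.0, 0.5, 2.0])   # jobs per second, hidden from the router
busy_time = np.ones(3)                    # observed service time per server (prior of 1)
n_done = np.ones(3)                       # completed jobs per server (prior of 1)

for job in range(2000):
    if rng.random() < 0.05:               # explore: pick a random server
        k = int(rng.integers(3))
    else:                                 # exploit: fastest estimated server
        k = int(np.argmax(n_done / busy_time))
    busy_time[k] += rng.exponential(1.0 / true_rates[k])  # simulate this job's service
    n_done[k] += 1

print("estimated rates:", np.round(n_done / busy_time, 2))
```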
Federated Reinforcement Learning

- The Blessing of Heterogeneity in Federated Q-learning: Linear Speedup and Beyond
- Jiin Woo, Gauri Joshi, and Yuejie Chi
- Journal of Machine Learning Research (JMLR), Feb 2025
- Shorter version in International Conference on Machine Learning (ICML), July 2023
- Federated Offline Reinforcement Learning: Collaborative Single-Policy Coverage Suffices
- Jiin Woo, Laixi Shi, Gauri Joshi, and Yuejie Chi
- International Conference on Machine Learning (ICML), July 2024
- Federated Reinforcement Learning: Communication-Efficient Algorithms and Convergence Analysis
- Sajad Khodadadian, Pranay Sharma, Gauri Joshi, Siva Theja Maguluri
- International Conference on Machine Learning (ICML), July 2022
- Selected for a long presentation (2.1% of submitted papers)
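These papers analyze federated variants of Q-learning in which agents run local updates and periodically synchronize by averaging their value estimates. Below is a minimal sketch of that template on a made-up deterministic tabular MDP with uniform exploration; the environment, averaging schedule, and constants are toy assumptions rather than any paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_states, n_actions = 4, 5, 2
alpha, gamma, sync_every = 0.1, 0.9, 10

# Toy deterministic MDP shared by all agents: random reward and next-state tables.
R = rng.random((n_states, n_actions))
P = rng.integers(n_states, size=(n_states, n_actions))

Q = np.zeros((n_agents, n_states, n_actions))   # one Q-table per agent
s = rng.integers(n_states, size=n_agents)       # each agent's current state

for t in range(1, 1001):
    for i in range(n_agents):
        a = int(rng.integers(n_actions))        # uniform behavior policy
        s2 = P[s[i], a]
        target = R[s[i], a] + gamma * Q[i, s2].max()
        Q[i, s[i], a] += alpha * (target - Q[i, s[i], a])
        s[i] = s2
    if t % sync_every == 0:
        Q[:] = Q.mean(axis=0)                   # periodic federated averaging

print("greedy action per state:", Q[0].argmax(axis=1))
```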
Erasure-Coding for Non-Linear Inference Computations

- Erasure Coded Neural Network Inference via Fisher Averaging
- Divyansh Jhunjhunwala, Neharika Jali, Gauri Joshi, and Shiqiang Wang
- International Symposium on Information Theory (ISIT), July 2024
- Ankur Mallick and Gauri Joshi
- Information Theory Workshop (ITW), Nov 2022
- Ankur Mallick, Sophie Smith, Gauri Joshi
- International Symposium on Topics in Coding (ISTC), Sept 2021
Tackling Computational Heterogeneity in Federated Learning

- Cooperative SGD: A Unified Framework for the Analysis of Local-Update SGD
- Jianyu Wang, Gauri Joshi
- Journal of Machine Learning Research (JMLR), 2021
- Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
- Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor
- Neural Information Processing Systems (NeurIPS), Dec 2020
- Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies
- Yae Jee Cho, Jianyu Wang, Gauri Joshi
- preprint
- Local Adaptivity in Federated Learning: Convergence and Consistency
- Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, Gauri Joshi
- preprint
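The objective-inconsistency paper above shows that naively averaging updates from clients that take different numbers of local steps biases the solution toward the fast clients, and removes the bias by normalizing each client's cumulative update by its step count before averaging (FedNova). The sketch below reproduces that effect on made-up 1-D quadratic objectives; the constants are illustrative and this is a simplified form, not the paper's general algorithm.

```python
import numpy as np

# Toy client objectives f_i(x) = 0.5 * (x - c_i)**2; the optimum of their
# average is mean(c). Clients take very different numbers of local steps.
c = np.array([0.0, 1.0, 4.0])
taus = np.array([1, 5, 50])          # local steps per round, per client
lr = 0.005

def run(normalize, rounds=2000):
    x = 10.0
    for _ in range(rounds):
        deltas = []
        for ci, tau in zip(c, taus):
            xi = x
            for _ in range(tau):                     # tau local gradient steps
                xi -= lr * (xi - ci)
            d = xi - x
            deltas.append(d / tau if normalize else d)
        scale = taus.mean() if normalize else 1.0    # effective steps per round
        x += scale * np.mean(deltas)
    return x

print("plain averaging      ->", round(run(False), 2))  # roughly 3.6: biased to the fast client
print("normalized averaging ->", round(run(True), 2))   # roughly 1.6, near the optimum
print("optimum of average   ->", round(c.mean(), 2))    # 1.67
```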
Asynchronous and Communication-Efficient Distributed SGD

- Overlap Local-SGD: An Algorithmic Approach to Hide Communication Delays in Distributed SGD
- Jianyu Wang, Hao Liang, Gauri Joshi
- International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
- MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
- Jianyu Wang, Anit Sahu, Gauri Joshi, Soummya Kar
- NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality, Dec 2019
- Distinguished Student Paper Award
- Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
- Jianyu Wang, Gauri Joshi
- SysML Conference, March 2019
- Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
- Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, and Priya Nagpurkar
- International Conference on Artificial Intelligence and Statistics (AISTATS), Apr 2018
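As a toy version of the error-runtime trade-off studied in the last paper above, the sketch below compares synchronous SGD, which waits for the slowest of m workers at every step, with asynchronous SGD that applies each worker's gradient as it arrives, even though it was computed at a stale model. Under a fixed wall-clock budget the asynchronous run completes many more, slightly stale updates. The quadratic objective, exponential delays, and constants are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, lr, noise, budget = 8, 0.05, 0.5, 50.0   # workers, step size, gradient noise, wall-clock

def grad(x):                                 # noisy gradient of f(x) = 0.5 * x**2
    return x + noise * rng.standard_normal()

def sync_sgd():
    x, t = 5.0, 0.0
    while t < budget:
        t += rng.exponential(1.0, m).max()   # every step waits for the slowest worker
        x -= lr * np.mean([grad(x) for _ in range(m)])
    return x

def async_sgd():
    x, t = 5.0, 0.0
    snapshot = np.full(m, 5.0)               # model each worker last read
    finish = rng.exponential(1.0, m)         # workers' next completion times
    while t < budget:
        i = int(np.argmin(finish))           # next worker to finish
        t = finish[i]
        x -= lr * grad(snapshot[i])          # update with a stale gradient
        snapshot[i] = x                      # worker picks up the fresh model
        finish[i] = t + rng.exponential(1.0)
    return x

print(f"sync  final error: {sync_sgd()**2:.3f}")   # few, fully fresh updates
print(f"async final error: {async_sgd()**2:.3f}")  # many, slightly stale updates
```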
Correlated Multi-armed Bandits

Multi-armed bandit algorithms have numerous data-driven decision-making applications, including advertising, A/B testing, and hyper-parameter tuning. Unlike classic works, which assume that the rewards observed by sampling different arms are independent, we consider correlated rewards. This correlation can be exploited to identify sub-optimal, "non-competitive" arms and avoid exploring them, reducing a K-armed bandit problem to a C-armed bandit problem, where C ≤ K is the number of "competitive" arms.
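Below is a schematic of this competitive-arm idea under an assumed latent-variable model, where every arm's reward is a known function of a single draw X ~ Uniform(0, 1) and the pseudo-reward bound is brute-forced over a grid. The model, constants, and elimination rule are illustrative, not the exact algorithms from the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed correlation model for this sketch: every arm's reward is a known
# function of one latent draw X ~ Uniform(0, 1).
fs = [lambda x: x, lambda x: 0.7 * x, lambda x: 0.8 * (1.0 - x)]
K, T = len(fs), 3000
xs = np.linspace(0.0, 1.0, 201)          # grid over the latent variable

def pseudo_reward(k, l, r):
    """Upper bound on arm k's expected reward given that arm l returned r."""
    consistent = xs[np.abs(fs[l](xs) - r) < 5e-3]    # latent values matching r
    return float(fs[k](consistent).max()) if consistent.size else 1.0

counts, means = np.zeros(K), np.zeros(K)
pseudo = np.full((K, K), 1.0)            # pseudo[k, l]: running average of the bound

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                       # initialization: pull every arm once
    else:
        best = int(np.argmax(means))
        # Arms whose pseudo-reward bound falls below the best arm's empirical
        # mean are "non-competitive" and are skipped; UCB runs over the rest.
        comp = [k for k in range(K) if k == best or pseudo[k, best] >= means[best]]
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)
        arm = max(comp, key=lambda k: ucb[k])
    r = fs[arm](rng.random())
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm]
    for k in range(K):                    # update the pseudo-reward averages
        pseudo[k, arm] += (pseudo_reward(k, arm, r) - pseudo[k, arm]) / counts[arm]

print("pulls per arm:", counts.astype(int))   # exploration concentrates on arm 0
```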
- Best-arm Identification in Correlated Multi-armed Bandits
- Samarth Gupta, Gauri Joshi, Osman Yagan
- IEEE Journal on Selected Areas in Information Theory (JSAIT) Special Issue on Sequential, Active, and Reinforcement Learning, 2021
- Multi-armed Bandits with Correlated Arms
- Samarth Gupta, Shreyas Chaudhari, Gauri Joshi, Osman Yagan
- IEEE Transactions on Information Theory, May 2021
- A Unified Approach to Translate Classic Bandit Algorithms to the Structured Bandit Setting
- Samarth Gupta, Shreyas Chaudhari, Subhojyoti Mukherjee, Gauri Joshi, Osman Yagan
- IEEE Journal on Selected Areas in Information Theory (JSAIT) Special Issue on Estimation and Inference, 2021
- Correlated Multi-armed Bandits with a Latent Random Source
- Samarth Gupta, Gauri Joshi, Osman Yagan
- International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Redundancy in Parallel Computing

In large-scale computing, where a job is divided into parallel tasks that run on different nodes, slow or straggling nodes can become the bottleneck. We use redundancy, in the form of replication or erasure coding, to design robust distributed computing algorithms that seamlessly adjust to varying node computation speeds. Applications include distributed matrix computations, data analytics, and inference.
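As a minimal illustration of how redundancy masks stragglers, the sketch below uses the textbook (3, 2) systematic MDS code for distributed matrix-vector multiplication: the matrix is split into two blocks plus a parity block, so A @ x is recoverable from any two of the three workers. This is the classic construction, not the rateless scheme from the SIGMETRICS paper below; the matrix, vector, and choice of straggler are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)

# (3, 2) MDS-style encoding: two systematic blocks plus one parity block.
A1, A2 = A[:3], A[3:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Suppose worker "w2" straggles: decode from the two responses that arrived.
results = {w: M @ x for w, M in tasks.items() if w != "w2"}
y1 = results["w1"]                      # A1 @ x, received directly
y2 = results["w3"] - y1                 # parity minus A1 @ x recovers A2 @ x
y = np.concatenate([y1, y2])

print("max decoding error:", np.abs(y - A @ x).max())   # ~1e-15: exact up to float error
```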
- Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-vector Multiplication
- Ankur Mallick, Malhar Chaudhari, Ganesh Palanikumar, Utsav Sheth, Gauri Joshi
- ACM SIGMETRICS, June 2020
- Best Paper Award
- Efficient Redundancy Techniques for Latency Reduction in Cloud Systems
- Gauri Joshi, Emina Soljanin, and Gregory Wornell
- ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2017