Research Overview

Our vision is to democratize AI by transcending the need for big compute and big data. We design scalable and sample-efficient algorithms that enable users to perform accurate training and inference. We are particularly interested in deploying AI in distributed, heterogeneous settings with communication, computation, and data constraints.

Tools and Approach

We use a diverse set of analytical tools drawn from optimization, applied probability, statistics, queueing theory, and coding theory. Our approach involves the following steps, often traversed in an iterative manner:

  1. Choosing a model that captures the essential elements of the system, while being simple and tractable.
  2. Designing algorithms that are provably optimal within the model framework.
  3. Evaluating the proposed algorithms on the real system, going beyond the assumptions of the model.

Selected Projects and Talk Videos

Communication-Efficient Aggregation Methods for Federated Learning


Data heterogeneity and communication limitations are two key challenges in federated learning. In this project we design optimization algorithms that improve convergence in the presence of data heterogeneity by modifying only the server-side aggregation procedure. We also design communication-efficient aggregation methods that can yield a better merged global model than vanilla averaging.
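For reference, the vanilla server-side baseline that these aggregation methods improve upon is weighted parameter averaging in the style of FedAvg. Below is a minimal sketch; the function and variable names are illustrative, not the project's actual code.

```python
import numpy as np

def aggregate(client_models, client_weights):
    """Vanilla weighted server-side averaging of client parameters
    (FedAvg-style baseline). client_weights are typically proportional
    to local dataset sizes."""
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()                            # normalize weights to sum to one
    stacked = np.stack(client_models)          # shape: (num_clients, num_params)
    return (w[:, None] * stacked).sum(axis=0)  # weighted average per parameter

# Example: three clients, the third holding twice as much data
models = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
global_model = aggregate(models, client_weights=[1, 1, 2])
```

The server-side modifications studied in this project replace this averaging step, so no change to client code or communication protocol is required.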



Learning-Augmented Routing Methods for Multi-server Queueing Systems


In multi-server queueing systems, classic job dispatching policies such as join-the-shortest-queue and shortest expected delay assume that the dispatcher knows the servers' service rates and queue lengths. We address job dispatching without knowledge of service rates and queue lengths, where the dispatcher can only obtain noisy estimates of the service rates by observing job departures. In systems where parameters of the optimal policy, such as the thresholds for routing to faster versus slower servers, are difficult to find analytically, we use reinforcement learning to learn the optimal policy.
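To make the setting concrete, here is a toy dispatcher that maintains departure-based service-rate estimates and routes jobs proportionally to them. This is a simplified illustration of the problem setup, not the policy analyzed in the project; all names are hypothetical.

```python
import random

class RateEstimatingDispatcher:
    """Toy dispatcher that never observes true service rates or queue
    lengths; it only sees job departures and accumulates busy time."""

    def __init__(self, num_servers):
        self.departures = [1] * num_servers   # optimistic prior: one departure each
        self.busy_time = [1.0] * num_servers  # avoids division by zero

    def estimated_rates(self):
        # Empirical rate estimate: departures per unit of observed busy time.
        return [d / t for d, t in zip(self.departures, self.busy_time)]

    def dispatch(self):
        # Route a job to a server with probability proportional to its
        # estimated service rate (one simple rate-aware heuristic).
        rates = self.estimated_rates()
        return random.choices(range(len(rates)), weights=rates, k=1)[0]

    def observe_departure(self, server, service_time):
        # Refine the noisy estimate whenever a job leaves a server.
        self.departures[server] += 1
        self.busy_time[server] += service_time
```

The estimates sharpen as more departures are observed, which is exactly the exploration–exploitation tension that motivates the learning-augmented policies in this project.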

Federated Reinforcement Learning


Federated reinforcement learning is a framework where the task of sampling observations from the environment is usually split across multiple agents. The N agents collaboratively learn a global model, without sharing their individual data and behavior policies. This global model is the unique fixed point of the average of N local operators, corresponding to the N agents. Each agent maintains a local copy of the global model and updates it using locally sampled data. In this project, we design federated versions of RL algorithms such as Q-learning and TD-learning and analyze their sample and communication complexity.
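The structure described above can be sketched for tabular Q-learning: each agent runs local updates on its own samples, and a server periodically averages the local Q-tables. This is a minimal illustration under simplifying assumptions (tabular setting, uniform behavior policy, synchronous rounds); `envs_step` is a hypothetical per-agent sampler, not part of any real API.

```python
import numpy as np

def federated_q_learning(envs_step, n_agents, n_states, n_actions,
                         local_steps, rounds, alpha=0.1, gamma=0.9):
    """Sketch of federated Q-learning. envs_step(agent, s, a) returns
    (reward, next_state) sampled from agent's local environment."""
    q_global = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        local_tables = []
        for agent in range(n_agents):
            q = q_global.copy()  # each agent starts from the global model
            s = 0
            for _ in range(local_steps):
                a = int(rng.integers(n_actions))  # uniform behavior policy
                r, s_next = envs_step(agent, s, a)
                # Standard Q-learning update on the locally sampled transition
                q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
                s = s_next
            local_tables.append(q)
        q_global = np.mean(local_tables, axis=0)  # communication round: average
    return q_global
```

The averaging frequency (here, once per `local_steps` updates) is the knob that trades communication cost against convergence speed, which is what the sample- and communication-complexity analysis quantifies.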


Erasure-Coding for Non-Linear Inference Computations


Erasure-coded computing has been successfully used in cloud systems to reduce tail latency caused by factors such as straggling servers and heterogeneous traffic variations. A majority of cloud computing traffic now consists of neural network inference on shared resources, where the response time of inference queries is adversely affected by the same factors. However, current erasure-coding techniques are largely restricted to linear computations such as matrix-vector and matrix-matrix multiplications, and hence do not apply to highly non-linear neural network functions. In this project, we seek to design codes for such non-linear computations: given two or more neural network models, we construct a coded model whose output is a linear combination of the outputs of the given neural networks.
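The decoding step behind this idea can be illustrated with toy models. Here the two "networks" are simple non-linear functions and the coded model is taken to be their exact sum; in the actual setting the coded model is itself a learned network that only approximates the sum, so recovery would be approximate. All names here are illustrative.

```python
import numpy as np

# Toy stand-ins for two deployed neural networks.
f1 = lambda x: np.tanh(x @ np.array([1.0, -1.0]))
f2 = lambda x: np.tanh(x @ np.array([0.5, 2.0]))

# Hypothetical "coded" model trained to approximate f1(x) + f2(x).
# For illustration we use the exact sum, so decoding is exact.
coded = lambda x: f1(x) + f2(x)

x = np.array([[0.3, -0.2]])
# If the server hosting f2 straggles, its output can be decoded from
# the coded model's output and f1's output:
f2_recovered = coded(x) - f1(x)
```

This is the same subtract-to-decode principle used in coded linear computation; the research challenge is constructing a single model whose output tracks the linear combination of highly non-linear networks.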

Tackling Computational Heterogeneity in Federated Learning


The emerging area of federated learning orchestrates distributed model training across a large number of resource-constrained mobile devices that collect data from their environment. Due to limited communication capabilities as well as privacy concerns, the data collected by these devices cannot be sent to the cloud for centralized processing. Instead, the nodes perform local training updates and only send the resulting model to the cloud. In this project, we tackle computational heterogeneity in federated optimization, for example, heterogeneous local updates made by the edge clients and intermittent client availability.
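One standard way to handle clients that perform different numbers of local updates is to normalize each client's cumulative update by its local step count before averaging, in the spirit of normalized-averaging schemes such as FedNova. The sketch below illustrates that idea; it is not the project's exact aggregation rule, and all names are illustrative.

```python
import numpy as np

def normalized_aggregate(global_model, client_updates, local_steps):
    """Aggregate updates from clients that ran different numbers of local
    SGD steps. Each update is divided by its step count so that clients
    with more compute do not dominate the average direction.

    client_updates[i] = local_model_i - global_model (cumulative progress)
    local_steps[i]    = number of local SGD steps client i performed
    """
    per_step = [u / k for u, k in zip(client_updates, local_steps)]
    # Apply the average per-step direction, scaled by the mean step count.
    step = np.mean(local_steps) * np.mean(per_step, axis=0)
    return global_model + step

g = np.zeros(2)
updates = [np.array([4.0, 0.0]),  # client 0 ran 4 local steps
           np.array([1.0, 1.0])]  # client 1 ran 1 local step
new_g = normalized_aggregate(g, updates, local_steps=[4, 1])
```

Without the normalization, the fast client's four steps would pull the global model disproportionately in its own direction, which is precisely the bias that computational heterogeneity introduces.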




Asynchronous and Communication-Efficient Distributed SGD


In large-scale machine learning, training is performed by running stochastic gradient descent (SGD) in a distributed fashion using multiple worker nodes. Synchronization delays due to straggling workers and communication delays in aggregating updates from worker nodes can significantly increase the run-time of the training job. Our goal is to design system-aware distributed SGD algorithms that strike the best trade-off between the training time and the error in the trained model.
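The asynchronous end of this trade-off can be illustrated with a toy event-driven simulation: the server applies each gradient as soon as a worker finishes, so gradients are computed on stale parameter copies. This is a sketch on a scalar quadratic objective under assumed exponential compute times, not our actual system.

```python
import heapq
import random

def async_sgd(n_workers=4, n_updates=200, lr=0.1, seed=0):
    """Toy simulation of asynchronous SGD on f(w) = w^2 / 2.
    Each worker computes a gradient on the parameter copy it last saw;
    the server applies gradients in arrival order (no synchronization)."""
    rng = random.Random(seed)
    w = 10.0
    # Each worker starts computing a gradient on the initial parameter.
    events = [(rng.expovariate(1.0), i, w) for i in range(n_workers)]
    heapq.heapify(events)
    for _ in range(n_updates):
        t, worker, w_stale = heapq.heappop(events)
        w -= lr * w_stale  # gradient of w^2/2 evaluated at the stale copy
        # The worker immediately starts a new gradient on the fresh parameter.
        heapq.heappush(events, (t + rng.expovariate(1.0), worker, w))
    return w
```

Because no worker waits for stragglers, wall-clock time per update stays low, but staleness perturbs the descent direction; bounding that error versus the synchronization cost is exactly the trade-off studied here.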




Correlated Multi-armed Bandits



Multi-armed bandit algorithms have numerous data-driven decision-making applications including advertising, A/B testing, and hyper-parameter tuning. Unlike classic works that assume independence of the rewards observed by sampling different arms, we consider correlated rewards. This correlation can be exploited to identify sub-optimal or "non-competitive" arms and avoid exploring them. This allows us to reduce a K-armed bandit problem to a C-armed bandit problem, where C is the number of "competitive" arms.
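The elimination step can be sketched as follows: samples of a well-explored arm yield "pseudo-reward" upper bounds on the means of other arms via the known correlation, and any arm whose pseudo-reward estimate already falls below the best empirical mean is marked non-competitive. This is a simplified sketch of the idea, with the pseudo-reward means taken as given inputs rather than computed from the correlation functions.

```python
import numpy as np

def competitive_arms(emp_means, counts, pseudo_means):
    """Identify "competitive" arms (sketch of the correlated-bandit
    reduction).

    emp_means[k]       : empirical mean reward of arm k
    counts[k]          : number of times arm k has been pulled
    pseudo_means[l][k] : empirical mean pseudo-reward of arm l computed
                         from samples of arm k, i.e. an upper bound on
                         arm l's mean inferred through the correlation
    """
    k_star = int(np.argmax(np.asarray(counts)))  # the most-sampled arm
    best = emp_means[k_star]
    # Arm l is non-competitive if even its optimistic pseudo-reward
    # estimate (from the well-sampled arm) falls below the best mean.
    return [l for l in range(len(emp_means))
            if l == k_star or pseudo_means[l][k_star] >= best]
```

A classical bandit algorithm such as UCB is then run only over the surviving arms, which is what shrinks the effective problem from K arms to C arms.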


Redundancy in Parallel Computing



In large-scale computing, where a job is divided into parallel tasks that are run on different computing nodes, slow or straggling nodes can become the bottleneck. We use redundancy in the form of replication or erasure coding to design robust distributed computing algorithms that can seamlessly adjust to varying node computation speeds. Applications include distributed matrix computations, data analytics, and inference.
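The benefit of replication is easy to see in a toy model where each node's completion time has a random (here, exponential) slowdown on top of a deterministic base: launching a task on two nodes and keeping the first result cuts the expected straggler delay in half. The model and parameters below are illustrative assumptions, not measurements.

```python
import random

def replicated_task_time(n_replicas, rng, base=1.0):
    """Completion time of a task replicated on n_replicas nodes, each
    with an independent Exp(1) slowdown; the fastest replica wins."""
    return min(base + rng.expovariate(1.0) for _ in range(n_replicas))

rng = random.Random(42)
n_trials = 2000
t_single = sum(replicated_task_time(1, rng) for _ in range(n_trials)) / n_trials
t_dup = sum(replicated_task_time(2, rng) for _ in range(n_trials)) / n_trials
```

Replication doubles the compute cost per task, whereas erasure coding achieves similar straggler protection at lower redundancy; choosing between them under varying node speeds is part of the design space in this project.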