I am a postdoc at CMU working on reinforcement learning and game theory with Tuomas Sandholm. I am interested in algorithms that make optimal decisions in the presence of other decision makers. In particular, I work on developing scalable algorithms with game-theoretic guarantees. Some topics I am currently working on include:
I received my PhD in computer science from the University of California, Irvine, where I worked with Pierre Baldi. During my PhD, I did research scientist internships at Intel Labs and DeepMind. Before that, I received my bachelor's degree in mathematics and economics from Arizona State University in 2017. Please reach out if you are interested in talking!
Representative Papers
Multi-Agent Reinforcement Learning
- ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret
- Mastering the Game of Stratego With Model-Free Multiagent Reinforcement Learning
- XDO: A Double Oracle Algorithm for Extensive-Form Games
- Neural Auto-Curricula in Two-Player Zero-Sum Games
- Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games
- Evolutionary Reinforcement Learning for Sample-Efficient Multiagent Coordination

Single-Agent Reinforcement Learning
- Reducing Variance in Temporal-Difference Value Estimation via Ensemble of Deep Networks
- Proving Theorems Using Incremental Learning and Hindsight Experience Replay
- Solving the Rubik's Cube With Deep Reinforcement Learning and Search
- Solving the Rubik's Cube With Approximate Policy Iteration
Selected Press
MIT Technology Review: A machine has figured out Rubik's Cube all by itself.