Welcome! I’m a Research Scientist at Carnegie Mellon University. Previously, I was a researcher at Indiana University, and before that at INRIA-Paris in the ERC Deepsea / Gallium group. Earlier still, I did my PhD with John Reppy at the University of Chicago, followed by a postdoc with Umut Acar at the Max Planck Institute for Software Systems in Kaiserslautern, Germany.
Contact me by email at me@mike-rainey.site.
This list is organized by research topic. A number of papers appear under multiple topics, as appropriate. For a list without duplicates, see the list of references at the bottom of this page.
Making parallel programs more robust in the face of parallel-specific overheads: (Rainey et al. 2021; Umut A. Acar et al. 2019; Umut A. Acar et al. 2018; Umut A. Acar, Charguéraud, and Rainey 2016; Umut A. Acar, Charguéraud, and Rainey 2015; Umut A. Acar, Charguéraud, and Rainey 2011; Bergstrom et al. 2010; Bergstrom et al. 2012; Rainey 2010)
Design and implementation of algorithms to map computations generated by parallel programs onto multicore machines: (Umut A. Acar, Charguéraud, and Rainey 2015; Umut A. Acar, Charguéraud, and Rainey 2013; Bergstrom et al. 2010; Bergstrom et al. 2012; Fluet et al. 2010; Fluet et al. 2008; Rainey 2010; Fluet, Rainey, and Reppy 2008)
Programming languages to raise the level of abstraction of parallel programs: (Umut A. Acar et al. 2016; Umut A. Acar, Charguéraud, and Rainey 2012; Fluet et al. 2007)
Work-efficient algorithm for fast parallel depth-first search of directed graphs: (Umut A. Acar, Charguéraud, and Rainey 2015)
Compiler optimization to control the layout of parallel-friendly data structures: (Bergstrom et al. 2013)
Data-parallel library supporting fusion optimizations: (Westrick et al. 2022)
A type-safe calculus for writing fast traversals and constructions of serialized representations of trees: (Koparkar et al. 2021; Vollmer et al. 2019)
Efficient algorithms and data structures that are amenable to parallel programming: (Charguéraud and Rainey 2017; Umut A. Acar, Charguéraud, and Rainey 2015; Umut A. Acar, Charguéraud, and Rainey 2014; Wise et al. 2005)
Concurrent data structures: (Umut A. Acar, Ben-David, and Rainey 2017)
Engineering the SML/NJ compiler to handle advanced features of foreign-function calls: (Blume, Rainey, and Reppy 2008)
A technique to help understand the causes of poor speedups: (Umut A. Acar, Charguéraud, and Rainey 2017)
Our PLDI’21 paper extends prior work on Heartbeat Scheduling: it addresses the challenge of delivering a suitable heartbeat mechanism via hardware interrupts, and it addresses barriers to sequential performance by proposing an assembly language equipped with support for task parallelism. Here, we provide a long version of the paper, complete with an appendix, along with supplemental materials, including the artifact evaluation and an executable dynamic semantics in PLT Redex.
Our PLDI’18 paper presents a proof and an experimental evaluation of our Heartbeat Scheduling technique (Umut A. Acar et al. 2018). This project page hosts the Coq source code of our proofs, along with the C++ prototype and benchmarking scripts we used for our experimental evaluation. This is the place to go if you are interested in checking the proofs or repeating our experiments.
This project features a C++ implementation of the fast DFS-like graph-traversal algorithm from our SC’15 paper (Umut A. Acar, Charguéraud, and Rainey 2015).
This project features a C++ template library which implements ordered, in-memory containers that are based on a new B-tree-like data structure.
PASL is a C++ library that provides algorithms for mapping computations generated by programs with implicit threading to multicore machines.
Manticore is a parallel programming language aimed at general-purpose applications that run on multicore processors.
I worked on the back end of the compiler. My main projects covered code generation for x86-64 and support for foreign-function calls.
Get the BibTeX file used to generate these references.