
Rui Pan 潘瑞

CS Ph.D. student @ Princeton

About

I'm a 3rd-year CS Ph.D. student at Princeton University, advised by Prof. Ravi Netravali. I am broadly interested in the intersection of systems, networks, and machine learning. My recent work has focused on systems for efficient ML/LLM inference. I received my B.S. in CS and Math from the University of Wisconsin-Madison, where I was fortunate to work with Prof. Shivaram Venkataraman on systems (cluster scheduling & resource allocation) for ML. I have also interned at AWS, working on systems for efficient LLM inference, and at MPI-INF, working on networked systems for ML training.

Publications

(*Equal contributions)
Preprints🀄️🤞
  • Marconi: Prefix Caching for the Era of Hybrid LLMs
    Rui Pan, Zhuang Wang, Zhen Jia, Can Karakus, Yida Wang, Luca Zancato, Tri Dao, Ravi Netravali
    In submission (currently under internal review); preprint coming soon :)

  • Tarzan: Passively-Learned Real-Time Rate Control for Video Conferencing
    Neil Agarwal, Rui Pan, Francis Y. Yan, Ravi Netravali
    arXiv 2024
    Rate control algorithms are at the heart of video conferencing platforms, determining target bitrates that match dynamic network characteristics for high quality. Recent data-driven strategies have shown promise for this challenging task, but the performance degradation they introduce during training has been a nonstarter for many production services, precluding adoption. This paper aims to bolster the practicality of data-driven rate control by presenting an alternative avenue for experiential learning: leveraging purely existing telemetry logs produced by the incumbent algorithm in production. We observe that these logs often contain effective decisions, albeit at the wrong times or in the wrong order. To realize this approach despite the inherent uncertainty that log-based learning brings (i.e., lack of feedback for new decisions), our system, Tarzan, combines a variety of robust learning techniques (i.e., conservatively reasoning about alternate behavior to minimize risk and using a richer model formulation to account for environmental noise). Across diverse networks (emulated and real-world), Tarzan outperforms the widely deployed GCC algorithm, increasing average video bitrates by 15-39% while reducing freeze rates by 60-100%.
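
    For intuition, here is a minimal sketch of the log-driven flavor of this approach (a hypothetical simplification for illustration, not Tarzan's actual method; the log fields and thresholds below are made up): mine the incumbent algorithm's telemetry for decisions that were followed by good outcomes, then fit a supervised policy on only those pairs instead of exploring live in production.

      # Hypothetical sketch: field names and thresholds are illustrative.
      def filter_good_decisions(log, max_freeze=0.0, min_goodput=0.8):
          """log: iterable of (network_state, chosen_bitrate, freeze_frac, goodput_frac).

          Keep only decisions followed by a good outcome; a supervised policy
          (state -> bitrate) can then be fit on the surviving pairs with any
          off-the-shelf regressor.
          """
          return [(state, bitrate)
                  for state, bitrate, freeze, goodput in log
                  if freeze <= max_freeze and goodput >= min_goodput]

      demo = [({"rtt_ms": 40}, 1200, 0.0, 0.9), ({"rtt_ms": 90}, 2500, 0.2, 0.5)]
      print(filter_good_decisions(demo))  # keeps only the first decision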

  • Optimizing Mixture-of-Experts Inference Latency Combining Model Deployment and Communication Scheduling
    Jialong Li, Shreyansh Tripathi, Lakshay Rastogi, Yiming Lei, Rui Pan, Yiting Xia
    arXiv 2024
    As machine learning models scale in size and complexity, their computational requirements become a significant barrier. Mixture-of-Experts (MoE) models alleviate this issue by selectively activating relevant experts. Despite this, MoE models are hindered by high communication overhead from all-to-all operations, low GPU utilization due to the synchronous communication constraint, and complications from heterogeneous GPU environments. This paper presents Aurora, which optimizes both model deployment and all-to-all communication scheduling to address these challenges in MoE inference. Aurora achieves minimal communication times by strategically ordering token transmissions in all-to-all communications. It improves GPU utilization by colocating experts from different models on the same device, avoiding the limitations of synchronous all-to-all communication. We analyze Aurora's optimization strategies theoretically across four common GPU cluster settings: exclusive vs. colocated models on GPUs, and homogeneous vs. heterogeneous GPUs. Aurora provides optimal solutions for three cases, and for the remaining NP-hard scenario, it offers a polynomial-time sub-optimal solution with only a 1.07x degradation from the optimal. Aurora is the first approach to minimize MoE inference time via optimal model deployment and communication scheduling across various scenarios. Evaluations demonstrate that Aurora significantly accelerates inference, achieving speedups of up to 2.38x in homogeneous clusters and 3.54x in heterogeneous environments. Moreover, Aurora enhances GPU utilization by up to 1.5x compared to existing methods.
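
    To give a feel for the communication-scheduling half, here is a toy round-based all-to-all scheduler (my own illustrative sketch under simplified assumptions, not Aurora's algorithm): each round greedily matches senders to receivers, heaviest transfers first, so no GPU sends or receives more than one chunk per round.

      # Toy all-to-all scheduler (illustrative only): demand[i][j] is the number
      # of tokens GPU i must send to GPU j; a whole transfer fits in one round.
      def schedule_all_to_all(demand):
          n = len(demand)
          rounds = []
          while any(demand[i][j] for i in range(n) for j in range(n)):
              busy_src, busy_dst, matching = set(), set(), []
              pairs = sorted(((demand[i][j], i, j) for i in range(n)
                              for j in range(n) if demand[i][j] > 0), reverse=True)
              for tokens, i, j in pairs:
                  if i not in busy_src and j not in busy_dst:
                      matching.append((i, j, tokens))
                      busy_src.add(i)
                      busy_dst.add(j)
                      demand[i][j] = 0
              rounds.append(matching)
          return rounds

      # Example with 3 GPUs: finishes in 2 parallel rounds instead of 6
      # sequential transfers.
      print(schedule_all_to_all([[0, 5, 2], [3, 0, 4], [6, 1, 0]]))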

Conferences🤓🎙️
  • Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving
    Yinwei Dai*, Rui Pan*, Anand Iyer, Kai Li, Ravi Netravali
    ACM SOSP 2024
    Machine learning (ML) inference platforms are tasked with balancing two competing goals: ensuring high throughput given many requests, and delivering low-latency responses to support interactive applications. Unfortunately, existing platform knobs (e.g., batch sizes) fail to ease this fundamental tension, and instead only enable users to harshly trade off one property for the other. This paper explores an alternate strategy to taming throughput-latency tradeoffs by changing the granularity at which inference is performed. We present Apparate, a system that automatically applies and manages early exits (EEs) in ML models, whereby certain inputs can exit with results at intermediate layers. To cope with the time-varying overhead and accuracy challenges that EEs bring, Apparate repurposes exits to provide continual feedback that powers several novel runtime monitoring and adaptation strategies. Apparate lowers median response latencies by 40.5-91.5% and 10.0-24.2% for diverse CV and NLP classification workloads, and median time-per-token latencies by 70.4-77.9% for generative scenarios, without affecting throughputs or violating tight accuracy constraints.
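
    For intuition, here is a minimal early-exit forward pass in PyTorch (a toy sketch, not Apparate itself: Apparate's contribution is the continual runtime adaptation of thresholds and ramp placements, which this omits; the architecture and threshold are made up):

      import torch
      import torch.nn as nn

      class ToyEarlyExitNet(nn.Module):
          """Two blocks plus one intermediate exit 'ramp' (hypothetical model)."""
          def __init__(self, dim=128, num_classes=10, threshold=0.9):
              super().__init__()
              self.block1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
              self.block2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
              self.ramp = nn.Linear(dim, num_classes)   # early-exit head
              self.final = nn.Linear(dim, num_classes)  # original model head
              self.threshold = threshold

          @torch.no_grad()
          def forward(self, x):
              h = self.block1(x)
              conf, pred = torch.softmax(self.ramp(h), dim=-1).max(dim=-1)
              # Real systems exit per input; this toy exits only when the
              # whole batch is confident, to keep the control flow trivial.
              if bool((conf >= self.threshold).all()):
                  return pred  # skip block2 entirely
              return self.final(self.block2(h)).argmax(dim=-1)

      print(ToyEarlyExitNet()(torch.randn(4, 128)))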

  • Improving DNN Inference Throughput Using Practical, Per-Input Compute Adaptation
    Anand Iyer, Mingyu Guan, Yinwei Dai, Rui Pan, Swapnil Gandhi, Ravi Netravali
    ACM SOSP 2024
    Machine learning inference platforms continue to face high request rates and strict latency constraints. Existing solutions largely focus on compressing models to substantially lower compute costs (and time) with mild accuracy degradations. This paper explores an alternate (but complementary) technique that trades off accuracy and resource costs on a per-input granularity: early exit models, which selectively allow certain inputs to exit a model from an intermediate layer. Though intuitive, early exits face fundamental deployment challenges, largely owing to the effects that exiting inputs have on batch size (and resource utilization) throughout model execution. We present E3, the first system that makes early exit models practical for realistic inference deployments. Our key insight is to split and replicate blocks of layers in models in a manner that maintains a constant batch size throughout execution, all the while accounting for resource requirements and communication overheads. Evaluations with NLP and vision models show that E3 can deliver up to 1.74× improvement in goodput (for a fixed cost) or 1.78× reduction in cost (for a fixed goodput). Additionally, E3's goodput wins generalize to autoregressive LLMs (2.8-3.8×) and compressed models (1.67×).
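
    A toy simulation of the batch-size effect the paper targets (my own hypothetical sketch; E3's actual mechanism splits and replicates blocks across GPUs, which this omits): as inputs exit early, the batch is backfilled from the request queue so every block step still runs at full batch size.

      import random
      from collections import deque

      def simulate(num_requests=100, num_blocks=4, exit_prob=0.2, batch_size=8):
          """`batch` maps each in-flight request id to the next block it needs."""
          queue = deque(range(num_requests))
          batch, completed = {}, 0
          while queue or batch:
              # Backfill so every step runs at (up to) full batch size.
              while len(batch) < batch_size and queue:
                  batch[queue.popleft()] = 0
              for rid in list(batch):
                  batch[rid] += 1  # run one block for this request
                  if batch[rid] == num_blocks or random.random() < exit_prob:
                      del batch[rid]  # finished, or took an early exit
                      completed += 1
          return completed

      print(simulate())  # all 100 requests complete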

  • Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning
    Pengfei Zheng, Rui Pan, Tarannum Khan, Shivaram Venkataraman, Aditya Akella
    USENIX NSDI 2023
    Dynamic adaptation has become an essential technique in accelerating distributed machine learning (ML) training: Recent studies have shown that dynamically adjusting model structure (e.g., lottery ticket hypothesis) or hyperparameters (e.g., batch size) can significantly accelerate training without sacrificing accuracy. However, existing ML cluster schedulers are not designed to handle dynamic adaptation. We show that existing schemes fail to provide fairness and degrade system efficiency when the training throughput changes over time under dynamic adaptation. We design Shockwave, a scheduler with future planning that builds on two key ideas. First, Shockwave extends classic market theory from static settings to dynamic settings to co-optimize efficiency and fairness. Second, Shockwave utilizes stochastic dynamic programming to handle uncertain, dynamic throughput. We build a system for Shockwave and validate its performance with both trace-driven simulation and cluster experiments. Results show that for traces of ML jobs with dynamic adaptation, Shockwave improves makespan by 1.3× and fairness by 2× when compared with existing fair scheduling schemes.

Workshops🤓👆
  • Efficient Flow Scheduling in Distributed Deep Learning Training with Echelon Formation
    Rui Pan*, Yiming Lei*, Jialong Li, Zhiqiang Xie, Binhang Yuan, Yiting Xia
    ACM HotNets 2022
    This paper discusses why flow scheduling does not apply to distributed deep learning training and presents EchelonFlow, the first network abstraction to bridge the gap. EchelonFlow deviates from the common belief that semantically related flows should finish at the same time. After extensive workflow analysis of diverse training paradigms, we make the key observation that distributed training jobs exhibit strict computation patterns, which may consume data at different times. We devise a generic method to model the drastically different computation patterns across training paradigms, and formulate EchelonFlow to regulate flow finish times accordingly. Case studies of mainstream training paradigms under EchelonFlow demonstrate the expressiveness of the abstraction, and our system sketch suggests the feasibility of an EchelonFlow scheduling system.
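
    A tiny sketch of the staggered-deadline idea (my illustrative reading with made-up numbers, not EchelonFlow's actual formulation): rather than forcing semantically related flows to finish together, pace each flow to the moment its receiving worker actually consumes the data.

      def pace_flows(flows, now=0.0):
          """flows: list of (flow_id, bytes_remaining, consume_time_s).

          Returns per-flow send rates (bytes/s) that finish each flow exactly
          when its receiver needs the data, allowing echelon-like staggered
          finish times instead of one collective deadline.
          """
          return {fid: size / max(t - now, 1e-9) for fid, size, t in flows}

      # Flow B's data is consumed later, so it can be paced more slowly.
      print(pace_flows([("A", 100e6, 1.0), ("B", 100e6, 2.0)]))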

Some other non-peer-reviewed write-ups include:
  • CS 759 Project Report: Cautiously Aggressive GPU Space Sharing for Improving Resource Utilization and Job Efficiency (pdf)
  • CS 744 Project Report: Comparing Black-Box Optimization Methods for Online DBMS Tuning (pdf)
  • AgDH: A System for Gathering and Disseminating Dairy Data (pdf)

Education

Experience

  • Applied Scientist Intern @ Amazon, AWS AI Team

    May 2024 - Aug 2024

    Santa Clara, CA

    Manager: Dr. Zhen Jia

  • 🇩🇪 Research Intern @ Max Planck Institute for Informatics (MPI-INF)

    Feb 2022 - Aug 2022

    Saarbrücken, Germany

    Advisor: Prof. Yiting Xia

Professional Activities

Interests

  • Checking out new places, either in person or on Google Maps. Cities I have lived in for more than a few months include: Shanghai, Pittsburgh, Madison, Berkeley, Saarbrücken, Princeton, and Mountain View. I play a little bit of GeoGuessr!
  • Collecting postcards. I love postcards, send me one or let me know if you want one!
  • Sports. I play soccer and table tennis for fun and I am a fan of FC Barcelona.
  • Watching movies & making pop culture references. One of my favorite YouTube channels is Cinema Therapy.
  • Music. I used to play accordion and alto saxophone because of Chinese parenting.
  • Writing. I have a personal blog that hosts some paper reading notes and other random blog posts.

Contacts

ruipan at cs dot princeton dot edu

© 2023 Rui Pan. Powered by Bootstrap. Feel free to fork this website's source code; just remember to remove the analytics stuff and add a reference back to my site.