Meta · United States

Research Scientist Intern, PyTorch Framework Performance (PhD)

Meta is a technology company that builds tools to connect people and foster communities. The company is seeking a PhD research intern to improve the performance of PyTorch models through Mixture-of-Experts (MoE) execution strategies, focusing on training and inference optimization on modern hardware.

Computer Software

Responsibilities

Design and evaluate communication-aware, kernel-aware, and quantization-aware MoE execution strategies, combining ideas such as expert placement, routing, batching, scheduling, and precision selection
Develop and optimize GPU kernels and runtime components for MoE workloads, including fused kernels, grouped GEMMs, and memory-efficient forward and backward passes (a minimal routing and per-expert GEMM sketch follows this list)
Explore quantization techniques (e.g., MXFP8, FP8) in the context of MoE, balancing accuracy, performance, and hardware efficiency (an illustrative FP8 micro-benchmark sketch also follows this list)
Build performance models and benchmarks to analyze compute, memory, communication, and quantization overheads across different sparsity regimes
Run experiments on single-node and multi-node GPU systems
Collaborate with the open-source community to gather feedback and iterate on the project
Contribute to PyTorch (Core, Compile, Distributed) within the scope of the project
Improve PyTorch performance in general
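
For readers less familiar with the terms above, here is a minimal, deliberately naive sketch of the kind of MoE execution path the role targets: top-2 routing followed by one matmul per expert, written in eager PyTorch. It is illustrative only, not Meta's or PyTorch's implementation; the names (SimpleMoE, n_experts, top_k) are placeholders, and the per-expert Python loop is exactly the pattern that grouped-GEMM and fused kernels are meant to replace.

```python
# Illustrative sketch only; not production code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # One feed-forward expert per slot; real systems shard these across GPUs.
        self.w_in = nn.Parameter(torch.randn(n_experts, d_model, d_ff) * d_model ** -0.5)
        self.w_out = nn.Parameter(torch.randn(n_experts, d_ff, d_model) * d_ff ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Pick the top-k experts and their gate weights per token.
        weights, expert_idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        # Naive schedule: one separate GEMM per expert over its assigned tokens.
        # Grouped-GEMM / fused kernels replace this loop with a single launch over
        # unevenly sized per-expert batches.
        for e in range(self.w_in.shape[0]):
            tok, slot = (expert_idx == e).nonzero(as_tuple=True)
            if tok.numel() == 0:
                continue
            h = F.gelu(x[tok] @ self.w_in[e]) @ self.w_out[e]
            out.index_add_(0, tok, h * weights[tok, slot, None])
        return out

tokens = torch.randn(512, 256)
print(SimpleMoE(d_model=256, d_ff=1024)(tokens).shape)  # torch.Size([512, 256])
```

In a real deployment the routed tokens would also be exchanged across devices (expert parallelism), which is where the communication-aware placement and scheduling mentioned above come in.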
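
Likewise, for the quantization and benchmarking items, a hedged sketch of per-tensor FP8 (e4m3) weight quantization timed against a dense baseline. It simulates FP8 by dequantizing before the matmul rather than calling a native FP8 GEMM, assumes a PyTorch build that provides torch.float8_e4m3fn (2.1 or newer), and uses hypothetical helper names (quantize_fp8, avg_ms); block-scaled formats such as MXFP8 are not shown.

```python
# Illustrative sketch only; simulated FP8, not a native FP8 kernel path.
import time
import torch

def quantize_fp8(w: torch.Tensor):
    # Per-tensor scale so the largest magnitude lands near the e4m3 max of 448.
    scale = w.abs().max().clamp(min=1e-12) / 448.0
    return (w / scale).to(torch.float8_e4m3fn), scale

def avg_ms(fn, iters: int = 50):
    # Prefer CUDA events for GPU timing; fall back to wall-clock time on CPU.
    if torch.cuda.is_available():
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            fn()
        end.record()
        torch.cuda.synchronize()
        return start.elapsed_time(end) / iters  # milliseconds
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) * 1e3 / iters

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 4096, device=device)
w = torch.randn(4096, 4096, device=device)
w_fp8, scale = quantize_fp8(w)

dense = lambda: x @ w
# Dequantize on the fly; a native path would use a scaled-matmul kernel instead.
simulated = lambda: x @ (w_fp8.to(x.dtype) * scale)

rel_err = ((dense() - simulated()).abs().mean() / dense().abs().mean()).item()
print(f"dense: {avg_ms(dense):.3f} ms | fp8-simulated: {avg_ms(simulated):.3f} ms | rel. error: {rel_err:.4f}")
```

The relative error is reported next to the timings because, as the bullet above notes, accuracy, performance, and hardware efficiency have to be weighed together.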

Qualifications

GPU kernel optimization · Mixture-of-Experts (MoE) · Quantization techniques · Transformer architectures · ML systems research · Distributed training · Experiment design · Technical communication · Cross-functional collaboration

Required

Currently has, or is in the process of obtaining, a PhD degree in the field of Computer Science or a related STEM field
Deep knowledge of transformer architectures, including attention, feed-forward layers, and Mixture-of-Experts (MoE) models
Strong background in ML systems research, with domain knowledge in MoE efficiency, such as routing, expert parallelism, communication overheads, and kernel-level optimizations
Hands-on experience writing GPU kernels using CUDA and/or cuteDSL
Working knowledge of quantization techniques and their impact on performance and accuracy
Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment

Preferred

Familiarity with distributed training and inference, such as data parallelism and collective communication
Ability to independently design experiments, analyze complex performance tradeoffs, and clearly communicate technical findings in writing and presentations
Proven track record of achieving significant results, as demonstrated by grants, fellowships, or patents, as well as first-authored publications at leading workshops or conferences such as NeurIPS, MLSys, ASPLOS, PLDI, CGO, PACT, ICML, or similar
Experience working and communicating cross-functionally in a team environment

Benefits

Company

Meta's mission is to build the future of human connection and the technology that makes it possible.

Funding

Current Stage: Late Stage

Leadership Team

Kathryn Glickman, Director, CEO Communications
Christine Lu, CTO Business Engineering NA
Company data provided by crunchbase