Distributed Training & Performance Engineer - Vice President
JPMorgan Chase is a leading financial institution offering innovative solutions to clients worldwide. The firm is seeking a Vice President-level engineer to design, optimize, and scale large-model pretraining workloads across hyperscale accelerator clusters, with a focus on improving training efficiency and performance.
Finance · Banking · Financial Services
Responsibilities
Design and optimize distributed training strategies for large-scale models, including data, tensor, pipeline, and context parallelism
Manage end-to-end training performance: from data input pipelines through model execution, communication, and checkpointing
Identify and eliminate performance bottlenecks using systematic profiling and performance modeling
Develop or optimize high-performance kernels using CUDA, Triton, or equivalent frameworks
Design and optimize distributed communication strategies to maximize overlap between computation and inter-node data movement (see the overlap sketch after this list)
Design memory-efficient training configurations (caching, optimizer sharding, checkpoint strategies), as sketched after this list
Evaluate and optimize training on multiple accelerator platforms, including GPUs and non-GPU accelerators
Contribute performance improvements back into internal training pipelines
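
As a minimal sketch of the compute/communication overlap mentioned in the communication item above (illustrative only, not the team's actual stack: the torchrun launch, bucket size, and tensor shapes are assumptions), an asynchronous NCCL all-reduce in PyTorch can proceed while independent computation runs on the default stream:

# Minimal compute/communication overlap sketch with PyTorch + NCCL.
# Assumes a torchrun launch (RANK / WORLD_SIZE / LOCAL_RANK set) and one GPU per rank;
# names such as grad_bucket are illustrative, not taken from the posting.
import os
import torch
import torch.distributed as dist

def overlap_step():
    device = torch.device("cuda", int(os.environ["LOCAL_RANK"]))
    torch.cuda.set_device(device)

    grad_bucket = torch.randn(64 * 1024 * 1024, device=device)  # pretend gradient bucket
    activations = torch.randn(4096, 4096, device=device)
    weight = torch.randn(4096, 4096, device=device)

    # Launch the all-reduce asynchronously; NCCL executes it on its own stream.
    work = dist.all_reduce(grad_bucket, op=dist.ReduceOp.SUM, async_op=True)

    # Independent compute (e.g., an earlier layer's backward) overlaps with the collective.
    out = activations @ weight

    # Block only when the reduced gradients are actually needed.
    work.wait()
    grad_bucket /= dist.get_world_size()
    return out, grad_bucket

if __name__ == "__main__":
    dist.init_process_group(backend="nccl")
    overlap_step()
    dist.destroy_process_group()

In practice, frameworks such as DDP and FSDP schedule this overlap automatically per gradient bucket; the sketch only shows the underlying primitive.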
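For the optimizer-sharding item above, a rough sketch along the same lines (again assuming a torchrun/NCCL setup; the model size and learning rate are placeholders) using PyTorch's ZeroRedundancyOptimizer, which keeps only one shard of the AdamW moment buffers on each data-parallel rank:

# Optimizer-state sharding sketch; model dimensions and lr are illustrative placeholders.
import os
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = DDP(torch.nn.Linear(8192, 8192).cuda(), device_ids=[local_rank])

# Each rank holds only its shard of the AdamW state (exp_avg, exp_avg_sq),
# cutting optimizer memory by roughly the data-parallel world size.
optimizer = ZeroRedundancyOptimizer(
    model.parameters(), optimizer_class=torch.optim.AdamW, lr=1e-4
)

loss = model(torch.randn(32, 8192, device=f"cuda:{local_rank}")).sum()
loss.backward()          # DDP averages gradients across ranks during backward
optimizer.step()         # each rank updates its shard, then broadcasts the result
dist.destroy_process_group()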
Qualifications
Required
Master's degree with 3+ years of industry experience, or Ph.D. with 1+ years of industry experience, in computer science, physics, math, engineering, or a related field
Engineering experience at top AI labs, HPC centers, chip vendors, or hyperscale ML infra teams
Strong experience designing and operating large-scale distributed training jobs across multinode accelerator clusters
Deep understanding of distributed parallelism strategies: data parallelism, tensor/model parallelism, pipeline parallelism, and memory/optimizer sharding
Proven ability to profile and optimize training performance using industry-standard tools such as Nsight, the PyTorch profiler, or equivalent (see the profiling sketch after this list)
Hands-on experience with GPU programming and kernel optimization
Strong understanding of accelerator memory hierarchies, bandwidth limitations, and compute-communication tradeoffs
Experience with collective communication libraries and patterns (e.g., NCCL-style collectives)
Proficiency in Python for ML systems development and C++ for performance-critical components
Experience with modern ML frameworks such as PyTorch or JAX in large-scale training settings
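
To make the profiling requirement above concrete, here is a minimal torch.profiler sketch (the model, input shape, and the ./prof output directory are placeholders, not anything specified in the posting) that records CPU and CUDA activity and writes a trace viewable in TensorBoard's profiler plugin:

# Minimal torch.profiler sketch on a single GPU; model and shapes are illustrative.
import torch
from torch.profiler import ProfilerActivity, profile, schedule

model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 4096)
).cuda()
inputs = torch.randn(32, 4096, device="cuda")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),    # skip startup noise
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./prof"),
    record_shapes=True,
    with_stack=True,
) as prof:
    for _ in range(5):
        model(inputs).sum().backward()
        prof.step()                                   # advance the profiler schedule

# Quick textual summary of the hottest kernels by total CUDA time.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))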
Preferred
Experience optimizing training workloads on non-GPU accelerators (e.g., TPU, or wafer-scale architectures)
Familiarity with compiler-driven ML systems (e.g., XLA, MLIR, Inductor) and graph-level optimizations
Experience designing custom fused kernels or novel execution strategies for attention or large matrix operations (a toy fusion sketch follows this list)
Strong understanding of scaling laws governing large-model pretraining dynamics and stability considerations
Contributions to open-source ML systems, distributed training frameworks, or performance-critical kernels
Prior experience collaborating directly with hardware vendors or accelerator teams
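
As a toy illustration of the kind of kernel fusion listed above (a sketch only: the fused bias-add + ReLU, the shapes, and the BLOCK_SIZE choice are arbitrary assumptions, and production attention or GEMM fusions are substantially more involved), here is a minimal Triton kernel that performs the bias add and activation in a single pass over memory:

# Fused bias-add + ReLU in Triton; shapes and block size are illustrative.
import torch
import triton
import triton.language as tl

@triton.jit
def fused_bias_relu_kernel(x_ptr, bias_ptr, out_ptr, n_elements, n_cols,
                           BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    # Broadcast the per-column bias over a row-major (rows, n_cols) tensor.
    bias = tl.load(bias_ptr + (offsets % n_cols), mask=mask)
    tl.store(out_ptr + offsets, tl.maximum(x + bias, 0.0), mask=mask)

def fused_bias_relu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = x.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    fused_bias_relu_kernel[grid](x, bias, out, n_elements, x.shape[-1], BLOCK_SIZE=1024)
    return out

x = torch.randn(4096, 1024, device="cuda")
bias = torch.randn(1024, device="cuda")
assert torch.allclose(fused_bias_relu(x, bias), torch.relu(x + bias))

Fusing the two elementwise ops avoids writing and re-reading the intermediate tensor, which is the same memory-bandwidth argument that motivates fused attention and fused matmul epilogues at much larger scale.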
Benefits
Comprehensive health care coverage
On-site health and wellness centers
A retirement savings plan
Backup childcare
Tuition reimbursement
Mental health support
Financial coaching and more
Company
Chase
Chase provides a broad range of financial services. It is a subsidiary of JPMorgan Chase.