Sciforium · 1 month ago

Distributed Training Engineer

Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. The Distributed Training Engineer will build, optimize, and maintain the software stack for large-scale AI training workloads, ensuring systems are fast, scalable, and efficient.

Industry: Artificial Intelligence (AI)

Responsibilities

Maintain, update, and optimize critical ML libraries and frameworks including JAX, PyTorch, CUDA, and ROCm across multiple environments and hardware configurations
Build, maintain, and continuously improve the entire ML software stack from ROCm/CUDA drivers to high-level JAX/PyTorch tooling
Ensure all model implementations are efficiently sharded, partitioned, and configured for large-scale distributed training (see the sketch after this list)
Continuously integrate and validate modules for runtime correctness, memory efficiency, and scalability across multi-node GPU/accelerator clusters
Conduct detailed profiling of compilation graphs, training workloads, and runtime execution to optimize performance and eliminate bottlenecks
Troubleshoot complex hardware–software interaction issues, including vLLM compilation failures on ROCm, CUDA memory leaks, distributed runtime failures, and kernel-level inconsistencies
Collaborate with research, infrastructure, and kernel engineering teams to improve system throughput, stability, and developer experience
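
As an illustration of the sharding and partitioning work described above, here is a minimal sketch using JAX's GSPMD-backed sharding API; the mesh shape, axis names, and tensor sizes are hypothetical and assume eight local devices:

    import jax
    import jax.numpy as jnp
    from jax.experimental import mesh_utils
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # Hypothetical 2x4 device mesh: "data" for batch parallelism,
    # "model" for tensor parallelism (assumes 8 local devices).
    mesh = Mesh(mesh_utils.create_device_mesh((2, 4)), axis_names=("data", "model"))

    # Shard the weight column-wise over "model"; replicate over "data".
    w = jax.device_put(jnp.ones((1024, 1024)), NamedSharding(mesh, P(None, "model")))

    # Shard the activations row-wise over the batch ("data") axis.
    x = jax.device_put(jnp.ones((256, 1024)), NamedSharding(mesh, P("data", None)))

    @jax.jit
    def forward(x, w):
        # GSPMD inserts whatever collectives are needed to keep this partitioned.
        return x @ w

    print(forward(x, w).sharding)  # propagated layout: P("data", "model")

Because GSPMD propagates output layouts from the input shardings, the model code stays device-count agnostic; only the mesh and the PartitionSpecs change between cluster configurations.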

Qualifications

Distributed training · Machine learning systems · CUDA/ROCm · Python · C++ · JAX · PyTorch · Profiling tools · Multi-node systems · Debugging · Collaboration

Required

5+ years of industry experience in ML systems, distributed training, or related fields
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or related technical fields
Strong programming experience in Python and C++, and familiarity with ML tooling and distributed systems
Deep understanding of profiling tools (e.g., Nsight, ROCm Profiler, XLA profiler, TPU tools)
Deep expertise in partitioning configuration for modern ML frameworks such as PyTorch and JAX
Experience with multi-node distributed training systems and orchestration frameworks (DTensor, GSPMD, etc.; see the sketch after this list)
Hands-on experience maintaining or building ML training stacks involving CUDA, ROCm, NCCL, XLA, or similar technologies
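
As a rough illustration of the DTensor-style orchestration named above, a minimal PyTorch sketch; the script name, mesh size, and tensor shapes are hypothetical and assume a torchrun launch across four GPUs:

    import os
    import torch
    import torch.distributed as dist
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.tensor import Shard, distribute_tensor

    # Hypothetical launch: torchrun --nproc_per_node=4 shard_demo.py
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    dist.init_process_group("nccl")
    mesh = init_device_mesh("cuda", (4,), mesh_dim_names=("tp",))

    # Shard a weight matrix column-wise across the 4-way tensor-parallel
    # dimension; each rank holds only its local (4096, 1024) slice.
    w = distribute_tensor(torch.randn(4096, 4096), mesh, placements=[Shard(1)])
    print(w.to_local().shape)  # torch.Size([4096, 1024]) on every rank

    dist.destroy_process_group()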

Preferred

Extensive experience with the XLA/JAX stack, including compilation internals and custom lowering paths (see the sketch after this list)
Familiarity with distributed serving or large-scale inference frameworks (e.g., vLLM, TensorRT, FasterTransformer)
Background in GPU kernel optimization or accelerator-aware model partitioning
Strong understanding of low-level C++ building blocks used in ML frameworks (e.g., XLA, CUDA kernels, custom ops)
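
For a flavor of what working with compilation internals looks like in practice (see the first item above), a small sketch using JAX's public lowering and compilation hooks; the function itself is an arbitrary placeholder:

    import jax
    import jax.numpy as jnp

    def f(x):
        return jnp.tanh(x @ x.T)

    # Lower through StableHLO, then compile and dump the optimized HLO
    # that XLA will actually execute; a common starting point when
    # debugging lowering paths or unexpected fusion decisions.
    lowered = jax.jit(f).lower(jnp.ones((128, 128)))
    print(lowered.as_text())    # StableHLO emitted by JAX
    compiled = lowered.compile()
    print(compiled.as_text())   # HLO after XLA's optimization passes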

Benefits

Medical, dental, and vision insurance
401k plan
Daily lunch, snacks, and beverages
Flexible time off
Competitive salary and equity

Company

Sciforium

Sciforium builds the next generation of AI models with unprecedented efficiency, privacy, and versatility.

Funding

Current stage: Early Stage
Total funding: $15.9M
Seed (2025-10-27): $12M
Pre-Seed (2024-06-01): $3.9M

Company data provided by Crunchbase.