NVIDIA

AI and ML HPC Cluster Engineer

NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. The AI/ML HPC Cluster Engineer will support day-to-day operations of production on-premises and multi-cloud AI/HPC clusters, ensuring system health, user satisfaction, and efficient resource utilization.

Artificial Intelligence (AI) · Consumer Electronics · GPU · Hardware · Software · Virtual Reality
Growth Opportunities
H-1B Sponsor Likely

Responsibilities

Support day-to-day operations of production on-premises and multi-cloud AI/HPC clusters, ensuring system health, user satisfaction, and efficient resource utilization
Directly administer internal research clusters, handling upgrades, incident response, and reliability improvements
Develop and improve our GPU-accelerated computing ecosystem, including building scalable automation solutions
Maintain heterogeneous AI/ML clusters on-premises and in the cloud
Support our researchers in running their workloads, including performance analysis and optimization
Analyze and optimize cluster efficiency, job fragmentation, and GPU waste to meet internal SLA targets (a minimal utilization-check sketch follows this list)
Support root cause analysis and suggest corrective action. Proactively find and fix issues before they occur
Triage and support postmortems for reliability incidents affecting users or infrastructure
Participate in a shared on-call rotation supported by strong automation, clear paths for responding to critical issues, and well-defined incident workflows
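
A minimal sketch of the kind of utilization check behind the GPU-waste and automation items above, assuming a Slurm-managed cluster with passwordless SSH to the compute nodes; the 10% threshold and the reporting format are illustrative assumptions, not an internal NVIDIA tool:

```python
# Flag allocated nodes whose GPUs sit below a utilization threshold.
# Assumes `sinfo`, `ssh`, and `nvidia-smi` are available on the admin host.
import subprocess

UTIL_THRESHOLD = 10  # percent; an allocated GPU below this is flagged as idle (assumed value)

def allocated_nodes():
    """Return hostnames Slurm currently reports as allocated."""
    out = subprocess.run(
        ["sinfo", "--noheader", "--states=allocated", "--format=%n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted({line.strip() for line in out.splitlines() if line.strip()})

def gpu_utilization(node):
    """Query per-GPU utilization (%) on a node over SSH via nvidia-smi."""
    out = subprocess.run(
        ["ssh", node, "nvidia-smi",
         "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [int(v) for v in out.split()]

if __name__ == "__main__":
    for node in allocated_nodes():
        idle = [u for u in gpu_utilization(node) if u < UTIL_THRESHOLD]
        if idle:
            print(f"{node}: {len(idle)} allocated GPU(s) under {UTIL_THRESHOLD}% utilization")
```

In practice a check like this would feed a dashboard or alerting pipeline rather than print to stdout, but it shows the shape of the automation involved.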

Qualifications

Linux administration · AI/HPC job schedulers · Cluster configuration management · Container technologies · Python programming · Bash scripting · NVIDIA GPUs · CUDA programming · AI/ML concepts · InfiniBand · Distributed storage systems · AI/HPC workflows

Required

Bachelor's degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience
Minimum 2 years of experience administering multi-node compute infrastructure
Background in managing AI/HPC job schedulers like Slurm, K8s, PBS, RTDA, BCM (formerly known as Bright), or LSF
Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions
Proven understanding of cluster configuration management tools (Ansible, Puppet, Salt, etc.), container technologies (Docker, Singularity, Podman, Shifter, Charliecloud), Python programming, and Bash scripting (see the container sanity-check sketch after this list)
Passion for continual learning and staying ahead of emerging technologies and effective approaches in the HPC and AI/ML infrastructure fields
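
A small container sanity-check sketch touching the container, GPU, and Python-scripting items above, assuming Docker with the NVIDIA Container Toolkit installed on the node; the CUDA image tag is an assumption, and comparable checks apply to Singularity or Podman:

```python
# Launch a throwaway CUDA container and confirm the node's GPUs are visible inside it.
import subprocess

IMAGE = "nvidia/cuda:12.4.1-base-ubuntu22.04"  # illustrative tag, not prescribed by the role

def gpus_visible_in_container():
    """Run nvidia-smi inside a temporary container and return the GPU names it sees."""
    out = subprocess.run(
        ["docker", "run", "--rm", "--gpus", "all", IMAGE,
         "nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    names = gpus_visible_in_container()
    print(f"{len(names)} GPU(s) visible in the container: {', '.join(names)}")
```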

Preferred

Background with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking
Experience with AI/ML concepts, algorithms, models, and frameworks such as PyTorch and TensorFlow (see the distributed-training sketch after this list)
Experience with InfiniBand, including IBOP and RDMA
Understanding of fast, distributed storage systems such as Lustre and GPFS for AI/HPC workloads
Applied knowledge in AI/HPC workflows that involve MPI
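
A minimal distributed-training sketch showing the kind of workload behind the PyTorch and NCCL items above, assuming PyTorch with CUDA support and a launch via `torchrun --nproc_per_node=<num_gpus>` on a GPU node; the script name and tiny model are illustrative only:

```python
# DistributedDataParallel smoke test over the NCCL backend.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # NCCL provides the GPU-to-GPU collectives used for gradient all-reduce.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
    model(x).sum().backward()  # gradients synchronized across ranks over NCCL here

    if dist.get_rank() == 0:
        print(f"NCCL all-reduce exercised across {dist.get_world_size()} rank(s)")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```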

Benefits

Equity
Benefits

Company

NVIDIA is a computing platform company operating at the intersection of graphics, HPC, and AI.

H-1B Sponsorship

NVIDIA has a track record of sponsoring H-1B visas; this does not guarantee sponsorship for this specific role. Additional information is provided below for reference. (Data powered by the US Department of Labor)
Distribution of job fields receiving sponsorship (chart omitted; one field shown is similar to this job)
Trends of Total Sponsorships
2025: 1,877
2024: 1,355
2023: 976
2022: 835
2021: 601
2020: 529

Funding

Current Stage: Public Company
Total Funding: $4.09B
Key Investors: ARPA-E, ARK Investment Management, SoftBank Vision Fund
2023-05-09 · Grant · $5M
2022-08-09 · Post-IPO Equity · $65M
2021-02-18 · Post-IPO Equity

Leadership Team

Jensen Huang, Founder and CEO
Michael Kagan, Chief Technology Officer

Company data provided by Crunchbase