Prime Intellect · 4 months ago

Member of Technical Staff - GPU Infrastructure

Prime Intellect enables the next generation of AI breakthroughs by helping customers deploy and optimize GPU clusters. As a Member of Technical Staff for GPU Infrastructure, you will turn customer requirements into production-ready systems for training advanced AI models, design optimal GPU cluster architectures, and provide hands-on technical support.

Artificial Intelligence (AI) · Cloud Computing

Responsibilities

Partner with clients to understand workload requirements and design optimal GPU cluster architectures
Create technical proposals and capacity planning for clusters ranging from 100 to 10,000+ GPUs
Develop deployment strategies for LLM training, inference, and HPC workloads
Present architectural recommendations to technical and executive stakeholders
Deploy and configure orchestration systems including SLURM and Kubernetes for distributed workloads
Implement high-performance networking with InfiniBand, RoCE, and NVLink interconnects
Optimize GPU utilization, memory management, and inter-node communication
Configure parallel filesystems (Lustre, BeeGFS, GPFS) for optimal I/O performance
Tune system performance from kernel parameters to CUDA configurations
Serve as primary technical escalation point for customer infrastructure issues
Diagnose and resolve complex problems across the full stack: hardware, drivers, networking, and software
Implement monitoring, alerting, and automated remediation systems
Provide 24/7 on-call support for critical customer deployments
Create runbooks and documentation for customer operations teams
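For a concrete flavor of the orchestration work above, a minimal SLURM batch script for a multi-node distributed training job might look like the sketch below. Node counts, partition name, NIC name, and the `train.py` entrypoint are all illustrative placeholders, not details from this posting:

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a 4-node x 8-GPU PyTorch training job.
# Partition, time limit, and script name are illustrative placeholders.
#SBATCH --job-name=llm-train
#SBATCH --partition=gpu           # GPU partition name varies per cluster
#SBATCH --nodes=4                 # 4 nodes x 8 GPUs = 32 GPUs total
#SBATCH --ntasks-per-node=1       # one launcher per node; torchrun spawns workers
#SBATCH --gres=gpu:8              # request all 8 GPUs on each node
#SBATCH --cpus-per-task=96
#SBATCH --time=48:00:00

# Rendezvous endpoint for distributed init: first node in the allocation.
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=29500

# Prefer the InfiniBand interface for NCCL traffic; the name is site-specific.
export NCCL_SOCKET_IFNAME=ib0

srun torchrun \
  --nnodes="$SLURM_NNODES" \
  --nproc_per_node=8 \
  --rdzv_backend=c10d \
  --rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
  train.py
```

In this sketch, SLURM handles node allocation while `torchrun` handles per-node worker processes, a common division of labor for LLM training on SLURM-managed clusters.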

Qualifications

GPU cluster architecture · SLURM · Kubernetes · NVIDIA GPU architecture · Infrastructure automation tools · Python · Bash · Container runtime configuration · Customer-facing technical leadership · Linux kernel tuning · Network topology design · Power and cooling requirements

Required

3+ years hands-on experience with GPU clusters and HPC environments
Deep expertise with SLURM and Kubernetes in production GPU settings
Proven experience with InfiniBand configuration and troubleshooting
Strong understanding of NVIDIA GPU architecture, CUDA ecosystem, and driver stack
Experience with infrastructure automation tools (Ansible, Terraform)
Proficiency in Python, Bash, and systems programming
Track record of customer-facing technical leadership
NVIDIA driver installation and troubleshooting (CUDA, Fabric Manager, DCGM)
Container runtime configuration for GPUs (Docker, Containerd, Enroot)
Linux kernel tuning and performance optimization
Network topology design for AI workloads
Power and cooling requirements for high-density GPU deployments
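As an illustration of the driver and container-runtime skills listed above, verifying that GPUs are healthy on the host and visible inside a container might look like this. It assumes an NVIDIA driver, DCGM, and the NVIDIA Container Toolkit are already installed; the CUDA image tag is just an example:

```shell
# Host-side checks: driver/CUDA version and per-GPU status.
nvidia-smi                      # driver version, utilization, memory, ECC state
dcgmi discovery -l              # DCGM's view of the GPUs on this node

# Container-side check: confirm the runtime passes GPUs through to containers.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the last command shows the same GPUs as the host, the container runtime is correctly configured for GPU workloads.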

Preferred

Experience with 1000+ GPU deployments
NVIDIA DGX, HGX, or SuperPOD certification
Distributed training frameworks (PyTorch FSDP, DeepSpeed, Megatron-LM)
ML framework optimization and profiling
Experience with AMD MI300 or Intel Gaudi accelerators
Contributions to open-source HPC/AI infrastructure projects

Company

Prime Intellect

Find compute. Train Models. Co-own intelligence.