NVIDIA
Principal Software Engineer - Inference as a Service
NVIDIA has been transforming computer graphics and accelerated computing for over 25 years and is seeking a Principal Software Engineer to join its Software Infrastructure Team. The role leads the design and development of a scalable platform for serving AI models, managing GPU resources, and ensuring service stability at massive scale.
AI Infrastructure · Artificial Intelligence (AI) · Consumer Electronics · Foundational AI · GPU · Hardware · Software · Virtual Reality
Responsibilities
Lead the design and development of a scalable, robust, and reliable platform for serving AI models for inference as a service
Architect and implement systems for dynamic GPU resource management, autoscaling, and efficient scheduling of inference workloads
Build and maintain the core infrastructure, including load balancing and rate limiting, to ensure the stability and high availability of inference services
Optimize system performance and latency for various model types, from large language models (LLMs) to computer vision models, ensuring high throughput and responsiveness
Develop tools and frameworks for real-time observability, performance profiling, and debugging of inference services
Qualifications
Required
15+ years of software engineering experience with deep expertise in distributed systems or large-scale backend infrastructure
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, other Engineering or related fields (or equivalent experience)
Strong programming skills in Python, Go, or C++ with a track record of building production-grade, highly available systems
Proven experience with container orchestration technologies like Kubernetes
A deep understanding of system architecture for high-performance, low-latency API services
Experience in designing, implementing, and optimizing systems for GPU resource management
Familiarity with modern observability tools (e.g., DataDog, Prometheus, Grafana, OpenTelemetry)
Demonstrated experience with deployment strategies and CI/CD pipelines
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment
Preferred
Experience with specialized inference serving frameworks
Open-source contributions to projects in the AI/ML, distributed systems, or infrastructure space
Hands-on experience with performance optimization techniques for AI models, such as quantization or model compression
Expertise in building platforms that support a wide variety of AI model architectures
Strong understanding of the full lifecycle of an AI model, from training to deployment and serving
Benefits
Equity
Company
NVIDIA
NVIDIA is a computing platform company operating at the intersection of graphics, HPC, and AI.
H1B Sponsorship
NVIDIA has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role; the figures below are provided for reference. (Data powered by the US Department of Labor)
Distribution of Different Job Fields Receiving Sponsorship
Trends of Total Sponsorships
2025 (1877)
2024 (1355)
2023 (976)
2022 (835)
2021 (601)
2020 (529)
Funding
Current Stage: Public Company
Total Funding: $4.09B
Key Investors: ARPA-E · ARK Investment Management · SoftBank Vision Fund
2023-05-09 — Grant · $5M
2022-08-09 — Post-IPO Equity · $65M
2021-02-18 — Post-IPO Equity
Recent News
2026-02-09 · The Motley Fool
Company data provided by Crunchbase