
NVIDIA · 19 hours ago

Principal Software Engineer - Inference as a Service

NVIDIA is a leading company in computer graphics and AI technology, seeking a Principal Software Engineer to join its Software Infrastructure Team. This role involves designing and developing the Inference as a Service platform, managing GPU resources, and ensuring high-performance, low-latency inference at scale.
AI Infrastructure · Artificial Intelligence (AI) · Consumer Electronics · Foundational AI · GPU · Hardware · Software · Virtual Reality
Growth Opportunities
H1B Sponsor Likely

Responsibilities

Lead the design and development of a scalable, robust, and reliable platform for serving AI models for inference as a service
Architect and implement systems for dynamic GPU resource management, autoscaling, and efficient scheduling of inference workloads
Build and maintain the core infrastructure, including load balancing and rate limiting, to ensure the stability and high availability of inference services
Optimize system performance and latency for various model types, from large language models (LLMs) to computer vision models, ensuring high-throughput and responsiveness
Develop tools and frameworks for real-time observability, performance profiling, and debugging of inference services

Qualifications

Distributed systems · GPU resource management · Python · Kubernetes · CI/CD pipelines · Observability tools · Problem-solving · Collaboration

Required

15+ years of software engineering experience with deep expertise in distributed systems or large-scale backend infrastructure
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, other Engineering or related fields (or equivalent experience)
Strong programming skills in Python, Go, or C++ with a track record of building production-grade, highly available systems
Proven experience with container orchestration technologies like Kubernetes
A deep understanding of system architecture for high-performance, low-latency API services
Experience in designing, implementing, and optimizing systems for GPU resource management
Familiarity with modern observability tools (e.g., DataDog, Prometheus, Grafana, OpenTelemetry)
Demonstrated experience with deployment strategies and CI/CD pipelines
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment

Preferred

Experience with specialized inference serving frameworks
Open-source contributions to projects in the AI/ML, distributed systems, or infrastructure space
Hands-on experience with performance optimization techniques for AI models, such as quantization or model compression
Expertise in building platforms that support a wide variety of AI model architectures
Strong understanding of the full lifecycle of an AI model, from training to deployment and serving

Benefits

Equity
Benefits

Company

NVIDIA is a computing platform company operating at the intersection of graphics, HPC, and AI.

H1B Sponsorship

NVIDIA has a track record of offering H1B sponsorships. Please note that this does not guarantee sponsorship for this specific role. The data below is provided for reference. (Data powered by the US Department of Labor)
[Chart: Distribution of Different Job Fields Receiving Sponsorship; highlighted field is similar to this job]
Trends of Total Sponsorships

2025: 1,877
2024: 1,355
2023: 976
2022: 835
2021: 601
2020: 529

Funding

Current Stage
Public Company
Total Funding
$4.09B
Key Investors
ARPA-E · ARK Investment Management · SoftBank Vision Fund

2023-05-09: Grant · $5M
2022-08-09: Post-IPO Equity · $65M
2021-02-18: Post-IPO Equity

Leadership Team

Jensen Huang, Founder and CEO
Michael Kagan, Chief Technology Officer
Company data provided by Crunchbase