
Red Hat · 1 day ago

Forward Deployed Engineer, AI Inference (vLLM and Kubernetes)

Red Hat is the world’s leading provider of enterprise open source software solutions and is seeking a Forward Deployed Engineer to join its vLLM and LLM-D Engineering team. In this role, you will deploy, optimize, and scale distributed Large Language Model inference systems, working closely with customer engineers to integrate their existing environments with Red Hat's cutting-edge inference platform.

Enterprise Software · InsurTech · Linux · Open Source · Operating Systems · Software

Culture & Values · H1B Sponsor Likely

Responsibilities

Orchestrate Distributed Inference: Deploy and configure LLM-D and vLLM on Kubernetes clusters. You will set up advanced deployment patterns such as disaggregated serving, KV-cache-aware routing, and KV cache offloading to maximize hardware utilization
Optimize for Production: Go beyond standard deployments by running performance benchmarks, tuning vLLM parameters, and configuring intelligent inference routing policies to meet SLOs for latency and throughput. You care about Time Per Output Token (TPOT), GPU utilization, GPU networking optimizations, and Kubernetes scheduler efficiency
Code Side-by-Side: Work directly with customer engineers to write production-quality code (Python/Go/YAML) that integrates our inference engine into their existing Kubernetes ecosystem
Solve the "Unsolvable": Debug complex interaction effects between specific model architectures (e.g., MoE, large context windows), hardware accelerators (NVIDIA GPUs, AMD GPUs, TPUs), and Kubernetes networking (Envoy/Istio)
Feedback Loop: Act as the "Customer Zero" for our core engineering teams. You will channel field learnings back to product development, influencing the roadmap for LLM-D and vLLM features
Travel only as needed to customers to present, demo, or help execute proof-of-concepts
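The Time Per Output Token metric named above is typically measured from per-token arrival timestamps during streaming generation, excluding time-to-first-token. A minimal sketch (the function name and timestamp format are illustrative, not part of vLLM's API):

```python
def time_per_output_token(token_times):
    """Mean inter-token latency (TPOT) in seconds.

    token_times: monotonically increasing arrival timestamps, one per
    output token. The gap before the first token (TTFT) is excluded,
    since prefill latency is usually reported separately.
    """
    if len(token_times) < 2:
        raise ValueError("need at least two tokens to measure TPOT")
    return (token_times[-1] - token_times[0]) / (len(token_times) - 1)

# Four tokens arriving 50 ms apart after a 500 ms prefill:
tpot = time_per_output_token([0.50, 0.55, 0.60, 0.65])
print(f"{tpot * 1000:.0f} ms/token")  # 50 ms/token
```

In practice these timestamps would come from a streaming client reading the server's token events; the averaging itself is the same.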

Qualifications

Kubernetes expertise · AI inference proficiency · Systems programming · Infrastructure as Code · Cloud & GPU hardware · Customer fluency · Bias for action

Required

8+ Years of Engineering Experience: You have a proven track record in Backend Systems, SRE, or Infrastructure Engineering
Customer Fluency: You speak both 'Systems Engineering' and 'Business Value'
Bias for Action: You prefer rapid prototyping and iteration over theoretical perfection. You are comfortable operating in ambiguity and taking ownership of the outcome
Deep Kubernetes Expertise: You are fluent in K8s primitives, from defining custom resources (CRDs, Operators, Controllers) to configuring modern ingress via the Gateway API. You have deep experience with stateful workloads and high-performance networking, including the ability to tune scheduler logic (affinity/tolerations) for GPU workloads and troubleshoot complex CNI failures
AI Inference Proficiency: You understand how an LLM forward pass works. You know what KV Caching is, why prefill/decode disaggregation matters, why context length impacts performance, and how continuous batching works in vLLM
Systems Programming: Proficiency in Python (for model interfaces) and Go (for Kubernetes controllers/scheduler logic)
Infrastructure as Code: Experience with Helm, Terraform, or similar tools for reproducible deployments
Cloud & GPU Hardware Fluency: You are comfortable spinning up clusters and deploying LLMs on bare-metal and hyperscaler Kubernetes clusters
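The memory pressure behind KV caching, mentioned in the AI inference bullet above, can be made concrete with the standard back-of-the-envelope formula; the model dimensions below are an assumed Llama-2-7B-like shape, not taken from this posting:

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, dtype_bytes=2):
    """Size of the KV cache for one sequence: a key and a value tensor
    (the leading factor of 2) per layer, per token, per KV head."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes

# Assumed 7B-class shape: 32 layers, 32 KV heads, head_dim 128, fp16 (2 bytes)
size_gib = kv_cache_bytes(32, 32, 128, seq_len=4096) / 2**30
print(f"{size_gib:.1f} GiB per 4k-token sequence")  # 2.0 GiB
```

Numbers like this are why techniques the posting names, such as KV cache offloading and cache-aware routing, matter: a handful of long-context sequences can consume a large fraction of GPU memory.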

Preferred

Experience contributing to open-source AI infrastructure projects (e.g., KServe, vLLM, Kubernetes)
Knowledge of Envoy Proxy or Inference Gateway (IGW)
Familiarity with model optimization techniques like Quantization (AWQ, GPTQ) and Speculative Decoding
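As a toy illustration of the idea behind quantization schemes like AWQ and GPTQ (the real methods are far more sophisticated, using group-wise scales and calibration data), here is plain symmetric per-tensor int8 round-tripping; everything below is a sketch, not either library's API:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map max|w| to 127."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [q * scale for q in quantized]

q, scale = quantize_int8([-127.0, 0.0, 127.0])
# Round-trip is exact here because the inputs already sit on the int8 grid;
# for arbitrary weights the error is bounded by roughly scale / 2 per value.
assert dequantize(q, scale) == [-127.0, 0.0, 127.0]
```

Storing 8-bit (or 4-bit) integers plus a small number of scales is what shrinks model memory footprint and, on suitable kernels, raises inference throughput.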

Benefits

Comprehensive medical, dental, and vision coverage
Flexible Spending Account - healthcare and dependent care
Health Savings Account - high deductible medical plan
Retirement 401(k) with employer match
Paid time off and holidays
Paid parental leave plans for all new parents
Leave benefits including disability, paid family medical leave, and paid military leave
Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!

Company

Red Hat is a software company that offers enterprise open-source software solutions. It is a subsidiary of IBM.

H1B Sponsorship

Red Hat has a track record of offering H1B sponsorship. Note that this does not guarantee sponsorship for this specific role; the data below is provided for reference. (Data powered by the US Department of Labor)
Distribution of Different Job Fields Receiving Sponsorship (chart)

Trends of Total Sponsorships

2025: 159
2024: 148
2023: 156
2022: 181
2021: 154
2020: 106

Funding

Current Stage
Public Company
Total Funding
unknown
2018-10-28: Acquired
1999-08-20: IPO
1999-03-09: Corporate Round

Leadership Team

Chris Wright
Chief Technology Officer and Senior Vice President Global Engineering

Mark Little
CTO JBoss
Company data provided by crunchbase