Technical Program Manager – Inference
CoreWeave is The Essential Cloud for AI™, providing a platform for innovators to build and scale AI. The Technical Program Manager will focus on distributed inference, model onboarding, and runtime optimization to ensure efficient and reliable AI model serving across the GPU-accelerated cloud.
Artificial Intelligence (AI) · Cloud Computing · Cloud Infrastructure · Information Technology · Machine Learning
Responsibilities
Drive end-to-end program management for distributed inference platform initiatives, including scaling, reliability, and performance optimization
Lead cross-functional programs for model onboarding — defining standardized workflows for integrating, validating, and deploying customer and foundation models on CoreWeave
Partner with engineering and product to define and deliver roadmap outcomes for serving throughput, latency, and cost efficiency
Coordinate multi-team execution across AI runtime, GPU infrastructure, and DevOps to deliver high-availability inference services
Build and operationalize success metrics, dashboards, and launch gates to measure reliability, efficiency, and quality across the inference stack
Collaborate with Product, Engineering, and Marketing on GTM readiness — ensuring technical enablement, documentation, and launch processes align with customer adoption and commercial rollout timelines
Establish repeatable processes for rollout management, performance regression tracking, and postmortem analysis
Create and lead operating cadences (program reviews, launch readiness, and retrospectives) to improve visibility and execution velocity
Collaborate with other TPMs and technical leads across CoreWeave to ensure platform-wide alignment on reliability, scalability, and release practices
Qualifications
Required
Bachelor's degree in a technical field or equivalent experience
8+ years of program management experience in distributed systems, cloud infrastructure, or AI/ML platform engineering
Proven experience driving large-scale serving or infrastructure programs from concept to production
Strong technical fluency in GPU compute, distributed inference systems, container orchestration, and cloud-native architectures
Demonstrated success in driving measurable improvements in performance, reliability, or operational efficiency
Excellent written and verbal communication skills, with the ability to align engineering, product, and leadership stakeholders around shared goals
Preferred
Familiarity with inference-serving frameworks (e.g., vLLM, SGLang)
Experience with model onboarding workflows, rollout strategies, and observability tooling (Prometheus, Grafana)
Understanding of ML workloads, GPU resource management, and performance optimization techniques
Experience defining or maturing technical program structures across multiple teams in a high-growth or startup environment
Benefits
Medical, dental, and vision insurance - 100% paid for by CoreWeave
Company-paid Life Insurance
Voluntary supplemental life insurance
Short and long-term disability insurance
Flexible Spending Account
Health Savings Account
Tuition Reimbursement
Ability to Participate in Employee Stock Purchase Program (ESPP)
Mental Wellness Benefits through Spring Health
Family-Forming support provided by Carrot
Paid Parental Leave
Flexible, full-service childcare support with Kinside
401(k) with a generous employer match
Flexible PTO
Catered lunch each day in our office and data center locations
A casual work environment
A work culture focused on innovative disruption
Company
CoreWeave
CoreWeave is a cloud-based AI infrastructure company offering GPU cloud services to simplify AI and machine learning workloads.
Funding
Current Stage: Public Company
Total Funding: $22.33B
Key Investors: JP Morgan Chase, Jane Street Capital, Stack Capital
2025-11-12 · Post-IPO Debt · $2.5B
2025-08-20 · Post-IPO Secondary
2025-07-31 · Post-IPO Debt · $2.6B
Recent News
The Motley Fool
2025-12-30
Company data provided by Crunchbase