Cerebras · 1 week ago

Sr. Inference ML Runtime Engineer

Cerebras Systems builds the world's largest AI chip, revolutionizing AI compute. The role involves designing and implementing APIs and ML features for generative AI models, and collaborating with cross-functional teams to improve performance and usability.
AI Infrastructure · Artificial Intelligence (AI) · Computer · Hardware · RISC · Semiconductor · Software
Growth Opportunities

Responsibilities

Lead and provide technical guidance to a team of software engineers working on complex machine learning integration projects
Design and implement ML features (e.g., structured outputs, biased sampling, predicted outputs) that improve performance of generative AI models at inference time
Design and implement high-throughput, low-latency multimodal inference models that support image, audio, and video inputs and outputs
Maintain our scalable serving backend for handling a high volume of concurrent requests
Scale our inference service by implementing detailed observability throughout the entire stack
Analyze and improve latency, throughput, memory usage, and compute efficiency, both across the service and within the implementation of individual features
Optimize software to accelerate generative LLM inference, achieving high throughput and low latency
Stay up-to-date with advancements in machine learning and deep learning, and apply state-of-the-art techniques to enhance our solutions
Evaluate trade-offs between different approaches, clearly articulate design choices, and develop detailed proposals for implementing new features
Uncover, scope, and prioritize significant areas of technical debt across the software stack to ensure continued high quality of the inference service
Build and maintain robust automated test suites to ensure software quality, performance, and reliability
Contribute to an agile team environment by delivering high-quality software and adhering to agile development practices
Lead cross-functional initiatives across the company to deliver high-quality inference solutions

Qualifications

Python · C++ · Deep Learning · Large-scale Inference Systems · ML Frameworks · Software Architectural Patterns · Problem-solving · Communication Skills · Collaboration

Required

Bachelor's, Master's, or PhD in Computer Science, Computer Engineering, Mathematics, or a related field
8+ years of experience in large-scale software engineering, with a focus on deep learning or related domains
Proficiency in Python for building and maintaining scalable systems
Advanced proficiency in C++, with an emphasis on multi-threaded programming, performance optimization, and system-level development
Demonstrated experience driving cross-functional projects
Experience building and scaling large-scale inference systems for LLMs or multimodal models
Familiarity with LLM serving frameworks, such as vLLM, SGLang, and TensorRT-LLM
Solid understanding of software architectural patterns for large-scale, high-performance applications
Hands-on experience with ML frameworks, such as PyTorch, and a strong understanding of their underlying architectures
Strong problem-solving skills, with the ability to balance technical depth with practical implementation constraints
Exceptional communication and presentation skills, with the ability to work both independently and collaboratively across multidisciplinary teams

Company

Cerebras

Cerebras Systems delivers the world's fastest AI inference. We are powering the future of generative AI.

Funding

Current Stage: Late Stage
Total Funding: $2.82B
Key Investors: Tiger Global Management, Atreides Management, Fidelity, Alpha Wave Ventures
2026-02-04 · Series H · $1B
2025-12-03 · Secondary Market
2025-09-30 · Series G · $1.1B

Leadership Team

Andrew Feldman
Founder and CEO
Bob Komin
Chief Financial Officer
Company data provided by Crunchbase