Machine Learning Engineer - Model Evaluations, Public Sector jobs in United States

Scale AI · 5 hours ago

Machine Learning Engineer - Model Evaluations, Public Sector

Scale AI develops reliable AI systems for critical decision-making in the public sector. As a Machine Learning Engineer, you will design, implement, and scale automated evaluation pipelines for advanced AI systems used in defense, intelligence, and federal missions.

Artificial Intelligence (AI) · Data Collection and Labeling · Generative AI · Image Recognition · Machine Learning
No H-1B Sponsorship · Security Clearance Required · U.S. Citizens Only

Responsibilities

Develop and maintain automated evaluation pipelines for ML models across functional, performance, robustness, and safety metrics, including LLM-judge–based evaluations
Design test datasets and benchmarks to measure generalization, bias, explainability, and failure modes
Build evaluation frameworks for LLM agents, including infrastructure for scenario-based and environment-based testing
Conduct comparative analyses of model architectures, training procedures, and evaluation outcomes
Implement tools for continuous monitoring, regression testing, and quality assurance for ML systems
Design and execute stress tests and red-teaming workflows to uncover vulnerabilities and edge cases
Collaborate with operations teams and subject matter experts to produce high-quality evaluation datasets

Qualifications

Python · TensorFlow · PyTorch · Machine Learning · Computer Vision · Deep Learning · Reinforcement Learning · NLP · Cloud Experience · Algorithms · Data Structures · Object-Oriented Programming · AI Safety · Interpretability · Adversarial Robustness

Required

An active security clearance, or the ability to obtain one
Experience in computer vision, deep learning, reinforcement learning, or NLP in production settings
Strong programming skills in Python; experience with TensorFlow or PyTorch
Background in algorithms, data structures, and object-oriented programming
Experience with LLM pipelines, simulation environments, or automated evaluation systems
Ability to convert research insights into measurable evaluation criteria

Preferred

Graduate degree in CS, ML, or AI
Cloud experience (AWS, GCP) and model deployment experience
Experience with LLM evaluation, CV robustness, or RL validation
Knowledge of interpretability, adversarial robustness, or AI safety frameworks
Familiarity with ML evaluation frameworks and agentic model design
Experience in regulated, classified, or mission-critical ML domains

Benefits

Comprehensive health, dental, and vision coverage
Retirement benefits
A learning and development stipend
Generous PTO
A commuter stipend

Company

Scale AI

Scale’s mission is to develop reliable AI systems for the world’s most important decisions.

Funding

Current Stage: Late Stage
Total Funding: $15.9B
Key Investors: Meta, Accel, Tiger Global Management
2025-06-10 · Corporate Round · $14.3B
2025-06-04 · Series Unknown
2024-05-21 · Series F · $1B

Leadership Team

Jason Droege
Interim Chief Executive Officer
Dennis Cinelli
Chief Financial Officer
Company data provided by Crunchbase