Technical Program Manager, Safeguards - Launches
Anthropic is a public benefit corporation on a mission to create reliable, interpretable, and steerable AI systems. The Technical Program Manager for Safeguards acts as the bridge between research, infrastructure, safety, and product teams to ensure safety measures are successfully deployed during model and product launches, managing complex deployments and coordinating cross-functional efforts.
Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
Responsibilities
Own end-to-end safeguards readiness for model launches - Drive all safeguards deployment activities from pre-launch planning through post-launch stabilization, ensuring classifier systems are properly configured, tested, and scaled for each new model release
Coordinate complex cross-platform deployments - Manage classifier rollouts across platforms, each with distinct infrastructure constraints, capacity requirements, and deployment timelines
Make real-time technical trade-off decisions - Balance competing constraints around classifier capacity, latency impacts, false positive rates, and robustness requirements during fast-moving launch cycles with aggressive timelines
Drive capacity planning and performance optimization - Partner with Inference teams to model compute requirements, optimize classifier configurations (batch sizes, mesh topologies, chip allocation), and ensure we stay within overhead budgets while meeting safety requirements
Lead launch day war room coordination - Serve as primary safeguards point of contact during launches, coordinating monitoring, troubleshooting classifier issues, and driving go/no-go calls on deployment decisions
Build detailed execution plans and run-of-show - Create comprehensive launch checklists covering classifier configurations, monitoring thresholds, alerting setup, rollback procedures, and cross-team dependencies
Partner with product teams on co-launch coordination - Ensure new product surfaces have appropriate safeguards protections configured, tested, and deployed, whether they launch independently or alongside new models
Monitor and respond to false positive patterns - Track flag rates across customer segments, identify problematic patterns, coordinate with research teams on classifier retraining priorities, and manage exemption processes for enterprise customers
Manage transition to business-as-usual operations - Once launches stabilize, coordinate handoff of ongoing monitoring and optimization work to sustaining engineering teams, then move to the next model or product launch as needed
Maintain technical documentation - Document classifier deployment configurations, capacity estimates, performance benchmarks, and lessons learned to build institutional knowledge
Qualifications
Required
Have deep technical program management experience in ML/AI systems - Several years coordinating complex deployments involving model inference, distributed systems, and real-time production services at scale
Have experience with AI safety or Trust & Safety - Familiar with concepts like false positive/false negative trade-offs, adversarial robustness, content moderation systems, or similar safety-critical technical systems
Can navigate high-ambiguity technical challenges - Synthesize incomplete information from multiple engineering teams, identify critical path blockers, and drive decisions when faced with competing technical constraints and tight deadlines
Excel at rapid decision-making under pressure - Comfortable making high-stakes trade-off decisions with incomplete information during time-critical launch windows, balancing safety requirements against capacity, latency, and user experience constraints
Are skilled at cross-functional coordination in complex technical environments - Proven track record managing programs spanning research, infrastructure, safety, security, and product teams, navigating competing priorities and driving alignment
Can communicate technical concepts clearly across varying levels of seniority - Equally comfortable explaining classifier handoff latency to product managers, debating capacity trade-offs with infrastructure teams, and briefing executives on launch readiness
Are comfortable with on-call and launch coverage - Willing to provide real-time support during launch windows (including early morning/late evening as needed) and maintain availability for time-sensitive decisions
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience
Benefits
Equity
Incentive compensation
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours
Lovely office space in which to collaborate with colleagues
Company
Anthropic
Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.
H1B Sponsorship
Anthropic has a track record of offering H1B sponsorships. Please note that this does not guarantee sponsorship for this specific role. The figures below are provided for reference (data powered by the US Department of Labor).
Trends of Total Sponsorships: 2025 (105) · 2024 (13) · 2023 (3) · 2022 (4) · 2021 (1)
Funding
Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B
Recent News
2026-01-11 · Insurance giant Allianz signs Claude Code deal with Anthropic | CIO
Company data provided by crunchbase