Anthropic
Technical Policy Manager, Cyber Harms
Anthropic is a public benefit corporation focused on creating reliable and beneficial AI systems. It is seeking a Cyber Harms Technical Policy Manager to lead efforts to prevent AI misuse in the cybersecurity domain, applying technical expertise to inform safety systems and policies.
Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
Responsibilities
Lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks
Design and oversee execution of capability evaluations ("evals") to assess the cyber-relevant capabilities of new models
Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques
Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms
Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies
Collaborate closely with internal and external threat modeling experts to develop training data for safety systems, and with ML engineers to train these systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers
Analyze safety system performance on live traffic, identifying gaps and proposing improvements
Conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks
Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces
Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle
Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies
Contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety
Monitor emerging technologies and threat landscapes for potential new risks, and strategically develop and apply mitigation strategies to address them
Mentor and develop team members, fostering a culture of technical excellence and responsible AI development
Qualifications
Required
An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity
5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing
2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders
Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)
Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)
Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks
Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)
Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders
Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases
A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies
Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve
Track record of translating specialized technical knowledge into actionable safety policies or enforcement guidelines
Preferred
Background in AI/ML systems, particularly experience with large language models
Experience developing ML-based security systems or adversarial ML research
Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)
Published security research, disclosed vulnerabilities, or participated in bug bounty programs
Understanding of Trust & Safety operations and content moderation at scale
Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth
Understanding of dual-use security research concerns and ethical considerations in AI safety
Benefits
Equity and benefits
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours
Company
Anthropic
Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.
H1B Sponsorship
Anthropic has a track record of offering H1B sponsorships. Please note that this does not guarantee sponsorship for this specific role. The data below is provided for reference. (Data powered by the US Department of Labor)
Trends of Total Sponsorships
2025: 105
2024: 13
2023: 3
2022: 4
2021: 1
Funding
Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B
Recent News
Longevity.Technology · 2026-01-14
Company data provided by Crunchbase