Staff Red Team Engineer, Safeguards
Anthropic is a public benefit corporation focused on creating reliable and safe AI systems. It is seeking a Staff Red Team Engineer to help ensure the safety of its AI products by identifying vulnerabilities and potential abuse scenarios before they can be exploited.
Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
Responsibilities
Conduct comprehensive adversarial testing across Anthropic’s product surfaces, developing creative attack scenarios that combine multiple exploitation techniques
Research and implement novel testing approaches for emerging capabilities, including agent systems, tool use, and new interaction paradigms
Design and execute 'full kill chain' attacks that emulate real-world threat actors attempting to achieve specific malicious objectives
Build and maintain systematic testing methodologies that evaluate every aspect of our systems
Develop automated testing frameworks to enable continuous assessment at scale
Collaborate with Product, Engineering, and Policy teams to translate findings into concrete improvements
Help establish metrics for measuring how effectively novel abuse is detected
Qualifications
Required
Demonstrated experience in penetration testing, red teaming, or application security
Strong technical skills in web application security, including hands-on expertise with security testing tools (Burp Suite, Metasploit, custom scripting frameworks, etc.)
A track record of discovering novel attack vectors and chaining vulnerabilities in creative ways
A public body of work such as CVEs, blog posts, or disclosed bug bounty reports
Experience with security testing tools and the ability to build custom automation
Adaptability to understand and build engagements around emerging threats outside of your direct area of expertise
Strong written and verbal communication skills, with the ability to explain technical concepts to varied audiences
Proven ability to think like an attacker
We require at least a Bachelor's degree in a related field or equivalent experience
Preferred
Experience with AI/ML security or adversarial machine learning
Experience testing API security and rate limiting systems
Background in testing business logic vulnerabilities and authorization bypass techniques
Background in anti-fraud, trust & safety, or abuse prevention systems
Familiarity with distributed systems and infrastructure security
Understanding of AI safety considerations beyond traditional security
Familiarity with abuse detection mechanisms and the ability to engineer novel bypasses
Benefits
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours
A lovely office space in which to collaborate with colleagues
Company
Anthropic
Anthropic is an AI research company focused on AI safety and on aligning AI systems with human values.
H-1B Sponsorship
Anthropic has a track record of offering H-1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. The data below is provided for reference. (Data powered by the US Department of Labor)
[Chart: Distribution of Different Job Fields Receiving Sponsorship; highlighted fields represent those similar to this job]
Trends of Total Sponsorships: 2025 (105) · 2024 (13) · 2023 (3) · 2022 (4) · 2021 (1)
Funding
Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B
Recent News
2026-01-20 · The Next Web
Company data provided by Crunchbase