Technical Scaled Abuse Threat Investigator
Anthropic is a public benefit corporation dedicated to creating reliable and interpretable AI systems. The Technical Scaled Abuse Threat Investigator will be responsible for detecting and investigating large-scale misuse of AI systems, combining open-source research with internal data analysis to inform defenses against threat actors.
Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
Responsibilities
Detect and investigate large-scale abuse patterns including model distillation, unauthorized API access, account farming, fraud schemes, and scam operations
Develop abuse signals and tracking strategies to proactively identify scaled adversarial activity and coordinated abuse networks
Conduct technical investigations using SQL, Python, and data science methodologies to analyze large datasets and uncover sophisticated abuse patterns
Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting AI systems at scale
Utilize investigation findings to implement systematic improvements to our safety approach and mitigate harm
Study trends internally and in the broader ecosystem to anticipate how AI systems could be exploited, generating and publishing reports
Build and maintain relationships with external threat intelligence partners and information sharing communities
Work cross-functionally to build out our threat intelligence program, establishing processes, tools, and best practices
Forecast how abuse actors will leverage advances in AI technology and inform safety-by-design strategies
Qualifications
Required
Have strong proficiency in SQL and Python with a data science background
Have experience with large language models and understanding of how AI technology could be exploited at scale
Have subject matter expertise in abusive user behavior detection, fraud patterns, account abuse, or platform integrity
Have experience tracking threat actors across surface, deep, and dark web environments
Can derive insights from large datasets to make key decisions and recommendations
Have experience with threat actor profiling and utilizing threat intelligence frameworks
Have strong project management skills and ability to build processes from the ground up
Possess excellent communication skills to collaborate with cross-functional teams and present to leadership
Education: at least a Bachelor's degree in a related field, or equivalent experience
Preferred
Experience at a major technology platform working on trust and safety, fraud, or abuse investigations
Background in AI safety, machine learning security, or technology abuse investigation
Experience building and scaling threat detection systems or abuse monitoring programs
Experience with financial crime investigation or fraud analytics
Fluency in Mandarin Chinese and/or Russian (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions
Active Top Secret security clearance
Benefits
Equity and benefits
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours
Company
Anthropic
Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.
H1B Sponsorship
Anthropic has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. The information below is provided for reference. (Data powered by the US Department of Labor)
Distribution of Different Job Fields Receiving Sponsorship
Trends of Total Sponsorships
2025 (105)
2024 (13)
2023 (3)
2022 (4)
2021 (1)
Funding
Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B
Recent News
Longevity.Technology
2026-01-14
Company data provided by Crunchbase