Anthropic · 5 hours ago

Technical Scaled Abuse Threat Investigator

Anthropic is a public benefit corporation dedicated to creating reliable, interpretable, and steerable AI systems. It is seeking a Technical Scaled Abuse Threat Investigator to join its Threat Intelligence team; the role is responsible for detecting and investigating large-scale misuse of AI systems and for developing strategies to counter adversarial activity.

Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
No H1B · Security Clearance Required · U.S. Citizen Only

Responsibilities

Detect and investigate large-scale abuse patterns including model distillation, unauthorized API access, account farming, fraud schemes, and scam operations
Develop abuse signals and tracking strategies to proactively identify scaled adversarial activity and coordinated abuse networks
Conduct technical investigations using SQL, Python, and data science methodologies to analyze large datasets and uncover sophisticated abuse patterns (a brief illustrative sketch follows this list)
Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting AI systems at scale
Utilize investigation findings to implement systematic improvements to our safety approach and mitigate harm
Study trends internally and in the broader ecosystem to anticipate how AI systems could be exploited, generating and publishing reports
Build and maintain relationships with external threat intelligence partners and information sharing communities
Work cross-functionally to build out our threat intelligence program, establishing processes, tools, and best practices
Forecast how abuse actors will leverage advances in AI technology and inform safety-by-design strategies
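
As a rough illustration of the kind of SQL/Python analysis described in the third responsibility above, the sketch below flags two common scaled-abuse signals: accounts with outlier API request volume and clusters of accounts sharing a sign-up fingerprint. The dataset, column names, and thresholds are hypothetical assumptions for illustration only, not a description of Anthropic's actual tooling or data.

```python
import pandas as pd

# Hypothetical account-activity export; the file and column names are assumptions.
# Expected columns: account_id, signup_fingerprint, api_requests_24h
events = pd.read_csv("account_activity.csv")

# Signal 1: accounts whose 24h API request volume is an extreme outlier
# (a possible indicator of distillation or unauthorized scraping).
threshold = events["api_requests_24h"].quantile(0.999)
high_volume = events[events["api_requests_24h"] > threshold]

# Signal 2: many distinct accounts sharing one sign-up fingerprint
# (a possible indicator of account farming).
farm_candidates = (
    events.groupby("signup_fingerprint")["account_id"]
    .nunique()
    .loc[lambda s: s >= 20]  # illustrative cut-off for a coordinated cluster
    .sort_values(ascending=False)
)

print(f"{len(high_volume)} high-volume accounts, "
      f"{len(farm_candidates)} suspected farming clusters")
```

In practice, signals like these would be tuned against known-good baselines and combined with additional context before any enforcement decision.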

Qualifications

SQL · Python · Data science · Large language models · Abusive user behavior detection · Threat actor profiling · Project management · Fluency in Mandarin · Fluency in Russian · Communication skills

Required

Strong proficiency in SQL and Python with a data science background
Experience with large language models and understanding of how AI technology could be exploited at scale
Subject matter expertise in abusive user behavior detection, fraud patterns, account abuse, or platform integrity
Experience tracking threat actors across surface, deep, and dark web environments
Ability to derive insights from large datasets and translate them into key decisions and recommendations
Experience with threat actor profiling and utilizing threat intelligence frameworks
Strong project management skills and ability to build processes from the ground up
Excellent communication skills to collaborate with cross-functional teams and present to leadership
At least a Bachelor's degree in a related field or equivalent experience

Preferred

Experience at a major technology platform working on trust and safety, fraud, or abuse investigations
Background in AI safety, machine learning security, or technology abuse investigation
Experience building and scaling threat detection systems or abuse monitoring programs
Experience with financial crime investigation or fraud analytics
Fluency in Mandarin Chinese and/or Russian (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions
Active Top Secret security clearance

Benefits

Optional equity donation matching
Generous vacation and parental leave
Flexible working hours

Company

Anthropic

Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.

Funding

Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon

2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B

Leadership Team

Dario Amodei
CEO & Co-Founder
Daniela Amodei
President & Co-Founder
Company data provided by Crunchbase