Anthropic · 2 months ago

Privacy Research Engineer, Safeguards

Anthropic is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems. The company is seeking a Privacy Research Engineer to design and implement privacy-preserving techniques, audit current practices, and set the direction for handling privacy in AI systems.

Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
H1B Sponsored

Responsibilities

Lead our privacy analysis of frontier models, carefully auditing the use of data and ensuring safety throughout the process
Develop privacy-first training algorithms and techniques
Develop evaluation and auditing techniques to measure the privacy of training algorithms
Work with a small, senior team of engineers and researchers to enact a forward-looking privacy policy
Advocate on behalf of our users to ensure responsible handling of all data

Qualifications

Privacy-preserving ML · Python · ML frameworks · Large language models · Differential privacy · Fast-paced environment · Cross-functional leadership · Communication skills

Required

Experience working on privacy-preserving machine learning
A track record of shipping products and features inside a fast-moving environment
Strong coding skills in Python and familiarity with ML frameworks like PyTorch or JAX
Deep familiarity with large language models, how they work, and how they are trained
Experience working with privacy-preserving techniques (e.g., differential privacy and how it differs from k-anonymity, l-diversity, and t-closeness; see the definition sketched after this list)
Experience supporting fast-paced startup engineering teams
Demonstrated success in bringing clarity and ownership to ambiguous technical problems
Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience
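
For reference on the differential privacy point above (the standard definition, included only for orientation, not part of the role description): a randomized mechanism M is (ε, δ)-differentially private if, for every pair of neighboring datasets D and D' differing in a single record and every set of outputs S,

\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta.

Unlike k-anonymity, l-diversity, and t-closeness, which are syntactic properties of a released table, this is a bound on how much any single record can shift the mechanism's output distribution.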

Preferred

Published papers on privacy-preserving ML at top academic venues
Prior experience training large language models (e.g., collecting training datasets, pre-training models, post-training models via fine-tuning and RL, running evaluations on trained models)
Prior experience developing tooling to support privacy-preserving ML (e.g., differential privacy in TF-Privacy or Opacus); a minimal sketch follows this list
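
As an illustration of the tooling mentioned in the last item, a minimal DP-SGD training sketch using Opacus; the model, data, and hyperparameters below are placeholders for illustration, not specifics of the role:

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model and data; placeholders for illustration only.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()
data = TensorDataset(torch.randn(512, 20), torch.randint(0, 2, (512,)))
loader = DataLoader(data, batch_size=64)

# Wrap the model, optimizer, and loader so gradients are clipped
# per sample and Gaussian noise is added before each update (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # scale of the added noise
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(3):
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()

# Privacy budget spent so far, for a chosen delta.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"(epsilon={epsilon:.2f}, delta=1e-5)-DP after training")

The PrivacyEngine wrapper clips each example's gradient to max_grad_norm and adds noise scaled by noise_multiplier before the optimizer step, which is what makes the otherwise ordinary training loop differentially private.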

Benefits

Competitive compensation and benefits
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours
A lovely office space in which to collaborate with colleagues

Company

Anthropic

Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.

H1B Sponsorship

Anthropic has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. Additional information is provided below for reference. (Data powered by the US Department of Labor)
Trends of Total Sponsorships
2025 (105)
2024 (13)
2023 (3)
2022 (4)
2021 (1)

Funding

Current Stage
Late Stage
Total Funding
$33.74B
Key Investors
Lightspeed Venture Partners · Google · Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B

Leadership Team

Dario Amodei
CEO & Co-Founder
Daniela Amodei
President & Co-Founder
Company data provided by Crunchbase