Anthropic · 2 weeks ago

[Expression of Interest] Research Scientist/Engineer, Alignment Finetuning

Anthropic is a public benefit corporation dedicated to creating reliable and beneficial AI systems. As a Research Scientist/Engineer on the Alignment Finetuning team, you will lead the development of techniques to train language models that align better with human values, focusing on moral reasoning and improved honesty.

Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning

H1B Sponsored

Responsibilities

Develop and implement novel finetuning techniques using synthetic data generation and advanced training pipelines (a toy sketch follows this list)
Apply these techniques to train models with better alignment properties, including honesty, character, and harmlessness
Create and maintain evaluation frameworks to measure alignment properties in models
Collaborate across teams to integrate alignment improvements into production models
Develop processes that help automate and scale the team's work
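For a concrete flavor of the first two responsibilities, here is a minimal Python sketch of a synthetic-data-and-evaluation loop. Everything in it is illustrative: query_model, the seed questions, and the scoring rule are hypothetical stand-ins and do not describe Anthropic's actual pipeline.

```python
# Illustrative sketch only: a toy synthetic-data -> evaluation loop.
# `query_model` is a hypothetical stand-in for a real LM API call;
# nothing here reflects Anthropic's actual training pipeline.
import random

def query_model(prompt: str) -> str:
    """Hypothetical LM call; a real pipeline would hit a model endpoint."""
    return random.choice(["I don't know.", "Yes.", "No."])

def make_synthetic_pairs(seed_questions: list[str]) -> list[dict]:
    """Generate (prompt, target) pairs for honesty finetuning.

    The target for unanswerable questions is an explicit admission of
    uncertainty, so a finetuned model learns to say "I don't know"
    rather than confabulate an answer.
    """
    return [{"prompt": q, "target": "I don't know."} for q in seed_questions]

def honesty_eval(questions: list[str]) -> float:
    """Score the fraction of unanswerable questions the model declines."""
    declined = sum("don't know" in query_model(q).lower() for q in questions)
    return declined / len(questions)

if __name__ == "__main__":
    unanswerable = [
        "What number am I thinking of?",
        "What will the weather be on this date in 50 years?",
    ]
    pairs = make_synthetic_pairs(unanswerable)
    print(f"{len(pairs)} synthetic training pairs")
    print(f"honesty score: {honesty_eval(unanswerable):.2f}")
```

In a real pipeline the synthetic pairs would feed a finetuning run and the evaluation would use held-out prompts; the point of the sketch is only the shape of the loop, not its contents.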

Qualifications

Python · ML model training · ML research implementation · Analytical skills · ML metrics · Language model finetuning · Synthetic data generation · Collaboration skills · Problem-solving skills

Required

Have an MS/PhD in Computer Science, ML, or a related field, or equivalent experience
Possess strong programming skills, especially in Python
Have experience with ML model training and experimentation
Have a track record of implementing ML research
Demonstrate strong analytical skills for interpreting experimental results
Have experience with ML metrics and evaluation frameworks
Excel at turning research ideas into working code
Be able to identify and resolve practical implementation challenges
Education requirement: at least a Bachelor's degree in a related field, or equivalent experience

Preferred

Experience with language model finetuning
Background in AI alignment research
Published work in ML or alignment
Experience with synthetic data generation
Familiarity with techniques like RLHF, constitutional AI, and reward modeling (see the sketch after this list)
Track record of designing and implementing novel training approaches
Experience with model behavior evaluation and improvement
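Since the role calls out reward modeling, below is a minimal sketch of the pairwise (Bradley-Terry) loss commonly used to train reward models for RLHF: the reward model should score the human-preferred response above the rejected one. The linear reward head and random features are placeholder assumptions, not a real model.

```python
# Sketch of the pairwise (Bradley-Terry) reward-modeling loss used in RLHF.
# The linear "reward head" and random features are placeholders standing in
# for a real reward model's final-token hidden states.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
hidden_dim = 16
reward_head = torch.nn.Linear(hidden_dim, 1)

# Stand-ins for hidden states of (chosen, rejected) response pairs.
chosen_feats = torch.randn(4, hidden_dim)
rejected_feats = torch.randn(4, hidden_dim)

r_chosen = reward_head(chosen_feats).squeeze(-1)      # shape: (batch,)
r_rejected = reward_head(rejected_feats).squeeze(-1)  # shape: (batch,)

# Maximize P(chosen > rejected) = sigmoid(r_chosen - r_rejected), i.e.
# minimize the negative log-sigmoid of the reward margin.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
print(f"pairwise reward loss: {loss.item():.4f}")
```

The trained reward model's scalar scores then serve as the reward signal when optimizing the policy model, which is where the finetuning and evaluation skills above come together.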

Benefits

Equity
Incentive compensation
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours

Company

Anthropic

Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.

H1B Sponsorship

Anthropic has a track record of offering H1B sponsorship. Note that this does not guarantee sponsorship for this specific role. The information below is provided for reference (data from the US Department of Labor).
[Chart: distribution of job fields receiving sponsorship; the highlighted field is similar to this job]
Trends of Total Sponsorships
2025 (105)
2024 (13)
2023 (3)
2022 (4)
2021 (1)

Funding

Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B

Leadership Team

Dario Amodei, CEO & Co-Founder
Daniela Amodei, President & Co-Founder
Company data provided by Crunchbase