Anthropic · 4 months ago
[Expression of Interest] Research Scientist/Engineer, Honesty
Anthropic is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems. The company is seeking a Research Scientist/Engineer to develop techniques that minimize hallucinations and improve truthfulness in language models, ensuring high standards of accuracy and honesty across diverse domains.
Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
Responsibilities
Design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy given the model’s knowledge
Develop specialized classifiers to detect potential hallucinations or miscalibrated claims made by the model (see the classifier sketch after this list)
Create and maintain comprehensive honesty benchmarks and evaluation frameworks
Implement techniques to ground model outputs in verified information, such as search and retrieval-augmented generation (RAG) systems
Design and deploy human feedback collection specifically for identifying and correcting miscalibrated responses
Design and implement prompting pipelines to generate data that improves model accuracy and honesty
Develop and test novel RL environments that reward truthful outputs and penalize fabricated claims
Create tools to help human evaluators efficiently assess model outputs for accuracy
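As an illustration of the classifier work described above, here is a minimal sketch of a hallucination classifier. The training data, labels, and bag-of-words features are hypothetical stand-ins; a production system would more likely use model-internal signals, retrieval evidence, or an LLM-based judge.

```python
# Minimal sketch of a hallucination classifier: TF-IDF features plus logistic
# regression over human-labeled claims. All data and features are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: model-generated claims annotated by reviewers
# as supported (0) or hallucinated (1).
claims = [
    "The Eiffel Tower is in Paris.",
    "The Eiffel Tower was completed in 1742.",
    "Water boils at 100 degrees Celsius at sea level.",
    "Water boils at 250 degrees Celsius at sea level.",
]
labels = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(claims, labels)

# predict_proba yields P(hallucinated); threshold it to flag claims for review.
print(clf.predict_proba(["The Eiffel Tower is in Rome."])[:, 1])
```

In practice the interesting design choice is the feature source: surface features like these generalize poorly, which is why the role pairs classifier training with retrieval grounding and human feedback collection.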
Qualifications
Required
Have an MS/PhD in Computer Science, ML, or related field
Possess strong programming skills in Python
Have industry experience with language model finetuning and classifier training
Show proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy
Care about AI safety and the accuracy and honesty of both current and future AI systems
Have experience in data science or the creation and curation of datasets for finetuning LLMs
Understand various metrics of uncertainty, calibration, and truthfulness in model outputs (a worked calibration example follows this list)
Hold at least a Bachelor's degree in a related field or have equivalent experience
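To make the calibration requirement above concrete, here is a minimal sketch of expected calibration error (ECE), one standard calibration metric. The binning scheme and sample data are illustrative, not a prescribed implementation.

```python
# Minimal sketch of expected calibration error (ECE): the gap between stated
# confidence and empirical accuracy, averaged over equal-width confidence bins.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - mean confidence| per confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by bin occupancy
    return ece

# Hypothetical outputs: a model that claims 90% confidence but is right 75%
# of the time is overconfident, and ECE quantifies that gap (here, 0.15).
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 1, 1, 0]))
```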
Preferred
Published work on hallucination prevention, factual grounding, or knowledge integration in language models
Experience with fact-grounding techniques
Background in developing confidence estimation or calibration methods for ML models
A track record of creating and maintaining factual knowledge bases
Familiarity with RLHF specifically applied to improving model truthfulness (a toy reward sketch follows this list)
Experience with crowd-sourcing platforms and human feedback collection systems
Experience developing evaluations of model accuracy or hallucinations
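As a toy illustration of the RLHF-for-truthfulness bullet above, the sketch below shapes a reward to favor verified claims. The `is_supported` callback is a hypothetical stand-in; a real environment would verify claims with a trained classifier or retrieval against a knowledge base.

```python
# Toy sketch of a truthfulness-shaped reward: verified claims earn a small
# bonus, fabricated ones a larger penalty, so fabrication is never worth it
# in expectation. The verifier and constants are illustrative assumptions.
def truthfulness_reward(claims, is_supported, bonus=0.25, penalty=-1.0):
    return sum(bonus if is_supported(c) else penalty for c in claims)

# Lookup-table "fact checker" standing in for a real verifier.
facts = {"Paris is the capital of France."}
score = truthfulness_reward(
    ["Paris is the capital of France.", "Lyon is the capital of France."],
    is_supported=lambda c: c in facts,
)
print(score)  # 0.25 + (-1.0) = -0.75
```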
Benefits
Competitive compensation and benefits
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours
A lovely office space in which to collaborate with colleagues
Company
Anthropic
Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.
H1B Sponsorship
Anthropic has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. The information below is provided for reference. (Data powered by the US Department of Labor.)
[Chart omitted: distribution of job fields receiving sponsorship, highlighting fields similar to this role]
Trends of Total Sponsorships
2021: 1 · 2022: 4 · 2023: 3 · 2024: 13 · 2025: 105
Funding
Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B
Company data provided by Crunchbase