Partnership on AI
AI Safety Research Scientist
Artificial Intelligence (AI) · Machine Learning
No H1B sponsorship
Responsibilities
Lead research that connects technical analysis with policy needs, identifying technical challenges underlying AI/AGI safety discussions
Propose governance interventions that could span different layers, from model safety to supply chain considerations to broader societal resilience measures
Use a multistakeholder organization's tools - rigorous analysis, public and private communications, working groups, and convenings - to gather insights from Partners on AI development processes and ensure research outputs are practical and impactful
Author/co-author research papers, blogs, and op-eds with PAI staff and Partners, and share insights at leading AI conferences like NeurIPS, FAccT, and AIES
Lead technical research workstreams with high autonomy, supporting the broader AI safety program strategy
Build and maintain strong relationships across PAI's internal teams and Partner community to advance research objectives
Represent PAI in key external forums, including technical working groups and research collaborations
Translate complex technical findings into clear, actionable recommendations for AI safety institutes, policymakers, industry partners, and the public
Support development of outreach strategies to increase adoption of PAI's AI safety recommendations
Qualifications
Required
PhD or MA with three or more years of research or practical experience in a relevant field (e.g., computer science, machine learning, economics, science and technology studies, philosophy)
Strong understanding of technical AI landscape and governance challenges, including safety considerations for advanced AI systems
Demonstrated ability to conduct rigorous technical governance research while considering broader policy and societal implications
Excellent communication skills, with proven ability to translate complex technical concepts for different audiences
Track record of building collaborative relationships and working effectively across diverse stakeholder groups
Adaptable and comfortable working in a dynamic, mission-driven organization
Preferred
Experience at frontier AI labs, tech companies, or government agencies working on AI-related areas (AI safety experience not required; we welcome those with ML, product, policy, or engineering backgrounds)
Subject matter expertise in relevant areas such as:
AI system Trust & Safety (e.g., developing monitoring systems, acceptable use policies, or safety metrics for large language models)
Privacy-preserving machine learning and differential privacy
Cybersecurity, particularly vulnerability assessment and incident reporting
Benefits
Twenty vacation days
Three personal reflection days
Sick leave and family leave above industry standards
High-quality PPO and HMO health insurance plans, many 100% covered by PAI
Dental and vision insurance 100% covered by PAI
Up to a 7% 401(k) match, vested immediately
Pre-tax commuter benefits (Clipper via TriNet)
Automatic cell phone reimbursement ($75/month)
Up to $1,000 in professional development funds annually
$150 per month to access co-working space
Regular team lunches & focused work days
Opportunities to attend AI-related conferences and events and to collaborate with our roughly 100 partners across industry, academia, and civil society
Company
Partnership on AI
Partnership on AI was established to study and formulate best practices on artificial intelligence technologies.
Funding
Current Stage: Early Stage
Total Funding: $0.6M
Key Investors: Knight Foundation
2021-05-12 · Grant · $0.6M
Company data provided by Crunchbase