LinkedIn · 1 month ago
Sr. Staff AI Engineer, GenAI Safety
LinkedIn is the world’s largest professional network, built to create economic opportunity for every member of the global workforce. The company is seeking a Sr. Staff AI Engineer for its GenAI Safety team to shape its generative AI safety direction and ensure responsible AI product development.
Professional Networking · Recruiting · Social Media · Social Recruiting
Responsibilities
Drive GenAI Safety Strategy: Serve as the senior technical leader shaping the company’s generative AI safety direction. Define the roadmap for safety alignment research, model evaluation, and system‑level protections
Lead AI Safety Research & Innovation: Guide LinkedIn’s research agenda in alignment, robustness, and responsible model behaviors. Stay ahead of academic and industry advances, rapidly translating insights into practical, production‑ready solutions
Design Safety‑First Foundations: Provide architectural leadership for scalable safety systems (benchmarking, red‑teaming, content safety, privacy‑preserving training, and real‑time guardrails), ensuring they are reliable, performant, and deeply integrated into AI infrastructure
Deliver High‑Impact Solutions in Ambiguous Spaces: Tackle LinkedIn’s toughest ethical, regulatory, and risk‑driven problems. Bring clarity and direction in areas with evolving standards, ensuring the company ships safe GenAI experiences at speed
Liaison With Product Engineering: Partner closely with product engineering teams to stay current on emerging experiments, venture bets, and product innovations, ensuring safety research and tooling anticipate and support the next wave of product development
Cross‑Functional Leadership: Collaborate with Legal, Compliance, Privacy, Infra, and Policy teams to operationalize safety requirements, translate regulatory guidance into technical specifications, and ensure end‑to‑end alignment across disciplines
Technical Mentorship: Mentor and grow a team of ~15 engineers across research, ML, and systems. Elevate engineering rigor, drive high‑bar execution, and nurture future technical leaders in AI safety
Company‑Wide Impact: Ensure safety techniques, tools, and evaluations are deployed across all GenAI products, safeguarding member trust while enabling safe, scalable innovation
Qualifications
Required
2+ years as a Technical Lead, Staff Engineer, Principal Engineer, or equivalent
5+ years of industry experience in AI or Machine Learning Engineering
BA/BS degree in Computer Science or a related technical discipline, or equivalent practical experience
Preferred
10+ years of industry and/or research experience in AI/ML delivering impact at scale
PhD in CS/AI/ML or related field (or equivalent research/industry achievements)
Expert understanding of Transformers; hands-on experience training, fine‑tuning, distilling/compressing, and deploying LLMs in production
Track record applying LLMs to recommender systems and language agents
Demonstrated leadership in red‑teaming (manual + automated), safety benchmarking/evaluations, content safety/guardrails, prompt‑injection/jailbreak detection, and abuse/misuse prevention
Experience translating Legal/Compliance requirements (e.g., EU AI Act) into technical controls, including harm taxonomies, model cards, and risk assessments
Proven ability to design safety‑first architectures (evaluation pipelines, moderation services, policy engines, incident response & telemetry) for distributed, real‑time ML systems
Strong understanding of RL (e.g., RLHF/RLAIF, offline/online RL) for language‑based agents, including safety‑aware reward design and feedback loops
Advanced Python and PyTorch; familiarity with TensorFlow
Experience with safety evaluation tooling (e.g., platforms akin to LLUME) and safety datasets/benchmarks
Significant contributions via top‑tier publications (NeurIPS, ICLR, ICML, ACL) and/or impactful open‑source or widely used safety tooling
Proven technical leadership mentoring ~15 engineers, setting direction, and elevating execution quality
Effective liaison with Product Engineering (tracking experiments and venture bets; aligning safety research to upcoming bets) and strong collaboration with Legal, Compliance, AI Infra, and Policy
Good to have: Experience with advanced reasoning/planning (e.g., CoT/ToT, self‑reflection, program synthesis, symbolic/neuro‑symbolic methods, search‑augmented reasoning, verification‑aware decoding)
Benefits
Generous health and wellness programs
Time away for employees of all levels
Annual performance bonus
Stock
Other applicable incentive compensation plans
Company
LinkedIn is a professional networking site that allows users to create business connections, search for jobs, and find potential clients. It is a subsidiary of Microsoft.
H1B Sponsorship
LinkedIn has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. Additional information is presented below for reference. (Data powered by the US Department of Labor)
[Chart: Distribution of job fields receiving sponsorship; the highlighted field is similar to this job]
Trends of Total Sponsorships
2025: 892
2024: 1108
2023: 913
2022: 1580
2021: 1043
2020: 1146
Funding
Current Stage: Public Company
Total Funding: $154.8M
Key Investors: Bain Capital Ventures, Greylock, Sequoia Capital
2016-06-13: Acquired
2016-02-15: Private Equity
2014-04-01: Series Unknown
Company data provided by crunchbase