Content Integrity Analyst
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. They are seeking a Content Integrity Analyst to investigate complex cases, apply usage policy, and build scalable systems that reduce risk over time.
Agentic AI · Artificial Intelligence (AI) · Foundational AI · Generative AI · Machine Learning · Natural Language Processing · SaaS
Responsibilities
Apply usage policy with rigor and nuance: Interpret and apply OpenAI’s usage policies to complex, novel scenarios; provide clear guidance to customers and internal teams; document edge cases and propose policy refinements
Mitigate material harm and catastrophic risks: Triage, assess, and support actions on content and behavior that can drive real-world harm, including high-severity domains; escalate appropriately and help drive cases to resolution
Serve as an escalation SME for high-stakes cases: Support incident response and executive-visible escalations by producing clear assessments, recommending next steps, and coordinating with Legal, Compliance, Security, Product, and Engineering as needed
Build scalable trust workflows: Design and operate processes for human-in-the-loop labeling, content/user reporting, appeals, enforcement actions, and continuous QA, with a high bar for quality and consistency
Drive automation and operational efficiency: Identify repeatable patterns, translate them into requirements, and partner with Engineering and Data teams to ship tooling and automation (including LLM-enabled automation) that improves speed, accuracy, and coverage
Analyze trends and strengthen feedback loops: Use quantitative and qualitative analysis to surface emerging abuse patterns, measure policy and tooling performance, and feed insights back into detection systems, product mitigations, and policy updates
Raise the quality bar: Define and monitor KPIs, build calibration and QA programs, iterate on reviewer training, and improve guidelines and tooling based on error analysis
Enable internal and external teams: Create playbooks, SOPs, and training that help partner teams understand our enforcement posture, risk thresholds, and operational philosophy
Qualifications
Required
5+ years in Trust & Safety, integrity, risk, or policy enforcement
Strong judgment under ambiguity
Ability to assess risk, spot trends, and use data to prioritize problems and evaluate solutions
Experience shipping operational efficiencies through tooling, process redesign, and automation
Ability to translate nuanced operational reality to engineers and policy stakeholders
Ability to learn quickly, share context generously, and optimize for team outcomes
Preferred
Experience working with vendors
Data fluency
Experience with high-severity safety domains (for example: CBRN, cyber abuse)
Experience building QA programs, calibration loops, and measurable reviewer performance systems
Hands-on experience writing requirements for internal tools, piloting automation, or partnering closely with Engineering on safety systems
Company
OpenAI
OpenAI is an AI research and deployment company that develops advanced AI models, including ChatGPT. It is a sub-organization of OpenAI Foundation.
H1B Sponsorship
OpenAI has a track record of offering H1B sponsorships. Please note that this does not
guarantee sponsorship for this specific role. Below presents additional info for your
reference. (Data Powered by US Department of Labor)
Distribution of Different Job Fields Receiving Sponsorship
Trends of Total Sponsorships
2025 (1)
2024 (1)
2023 (1)
2022 (18)
2021 (10)
2020 (6)
Funding
Current Stage: Growth Stage
Total Funding: $79B
Key Investors: The Walt Disney Company, SoftBank, Thrive Capital
2025-12-11 · Corporate Round · $1B
2025-10-02 · Secondary Market · $6.6B
2025-03-31 · Series Unknown · $40B
Company data provided by Crunchbase