Security Researcher, Trusted Computing and Cryptography jobs in United States

OpenAI · 3 months ago

Security Researcher, Trusted Computing and Cryptography

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The role involves leading efforts to map and prioritize vulnerabilities in AI systems while driving offensive research and engaging with external partners. Responsibilities include building threat maps, delivering reports on vulnerabilities, and translating technical issues for diverse audiences.

Agentic AI · Artificial Intelligence (AI) · Foundational AI · Generative AI · Machine Learning · Natural Language Processing · SaaS
Growth Opportunities
No H1B · U.S. Citizen Only

Responsibilities

Build an AI Stack Threat Map across the AI lifecycle, from data to deployment
Deliver deep-dive reports on vulnerabilities and mitigations for training and inference, focused on systemic, cross-layer risks
Orchestrate inputs across research, engineering, security, and policy to produce crisp, actionable outputs
Engage external partners as the primary technical representative; align deliverables to technical objectives and milestones
Perform hands-on threat modeling, red-team design, and exploitation research across heterogeneous infrastructures (compilers, runtimes, and control planes)
Translate complex technical issues for technical and executive audiences; brief on risk, impact, and mitigations

Qualifications

Offensive security techniques · Threat modeling · AI/ML infrastructure · Cross-layer vulnerabilities · Communication skills

Required

A current security clearance is not mandatory, but eligibility for clearance sponsorship is required
Lead an effort to map, characterize, and prioritize cross-layer vulnerabilities in advanced AI systems, spanning data pipelines, training/inference runtimes, and system and supply chain components
Drive offensive research, produce technical deliverables, and serve as OpenAI's primary technical counterpart for select external partners (including potential U.S. government stakeholders)

Preferred

Have led high-stakes security research programs with external sponsors (e.g., national-security or critical-infrastructure stakeholders)
Have deep experience with cutting-edge offensive-security techniques
Are fluent across AI/ML infrastructure (data, training, inference, schedulers, accelerators) and can threat-model end-to-end
Operate independently, align diverse teams, and deliver on tight timelines
Communicate clearly and concisely with experts and decision-makers

Company

OpenAI is an AI research and deployment company that develops advanced AI models, including ChatGPT. It is a sub-organization of OpenAI Foundation.

Funding

Current Stage
Growth Stage
Total Funding
$79B
Key Investors
The Walt Disney Company · SoftBank · Thrive Capital
2025-12-11 · Corporate Round · $1B
2025-10-02 · Secondary Market · $6.6B
2025-03-31 · Series Unknown · $40B

Leadership Team

Sam Altman
CEO & Co-Founder
Greg Brockman
President, Chairman, & Co-Founder
Company data provided by Crunchbase