
Ciph Lab · 2 days ago

Principal AI Security & Risk Researcher

Ciph Lab is an emerging company focused on operationalizing responsible AI governance at scale. It is seeking a Principal AI Security & Risk Researcher to lead its security track, designing continuous security monitoring systems and frameworks that help enterprises assess and mitigate AI risks.

Computer Software

Responsibilities

Research emerging AI attack vectors, guardrail bypasses, and defense mechanisms
Monitor threat intelligence feeds and security research communities
Experiment with new AI security tools and assessment methodologies
Stay current with LLM vulnerabilities, adversarial techniques, and model safety
Design security assessment frameworks for generative AI and agentic systems
Develop risk evaluation methodologies that adapt as threats evolve
Create audit telemetry and security monitoring protocols
Translate security research into operational frameworks that enterprises can deploy
Collaborate with the technical team to build automated security testing tools
Design continuous threat monitoring and alerting systems
Create security validation processes for framework updates
Ensure monitoring systems themselves are secure (meta-security)
Build audit trails for compliance documentation
Contribute to Ciph Lab's weekly newsletter on AI security and risk
Position the company as a trusted voice in AI security governance
Share insights publicly (while protecting proprietary methods)

Qualifications

AI/ML security · Security frameworks · Vulnerability assessment · LLM architectures · Risk evaluation methodologies · AI governance frameworks · Research capabilities · Self-directed · Systems thinker · Continuous learner · Collaborative

Required

5+ years in cybersecurity, with 2+ years focused on AI/ML security, red teaming, or adversarial testing
Deep understanding of LLM architectures, prompt injection, jailbreaking, and model safety mechanisms
Experience developing security testing frameworks or vulnerability assessment tools
Strong research capabilities with ability to translate technical findings into actionable frameworks

Preferred

Experience with AI governance frameworks (NIST AI RMF, ISO 42001, EU AI Act)
Background in enterprise risk assessment or security audit methodologies
Familiarity with agent architectures, RAG systems, or multi-modal AI security
Published work in AI security, adversarial ML, or related fields

Benefits

Equity ownership

Company

Ciph Lab

AI is transforming how work gets done, but organizations are still making decisions with yesterday’s structures.

Funding

Current Stage: Early Stage