10a Labs
AI Red Teamer
10a Labs is an applied research and AI security company trusted by AI unicorns, Fortune 10 companies, and U.S. tech leaders. In this role, you will develop and run adversarial test suites for LLMs and image/video models, analyze outputs, triage failures, and contribute to internal tooling. The position is ideal for entry-level candidates with an interest in red-teaming and AI safety.
Research
Responsibilities
Develop and run adversarial test suites—both manual and scripted—for LLMs and image/video models
Craft multilingual prompts, jailbreaks, and escalation chains targeting policy edge cases
Analyze outputs, triage failures, and write concise vulnerability reports
Contribute to internal tooling (e.g., prompt libraries, scenario generators, dashboards)
Qualifications
Required
Bachelor's degree—or equivalent experience—in CS, data science, linguistics, international studies, or security
Basic proficiency with Python and command-line tools
Demonstrated interest in AI safety, adversarial ML, or abuse detection
Strong writing skills for short vulnerability reports and long-form analyses
Ability to rapidly context switch across domains, modalities, and abuse areas
Excited to work in a fast-paced and ambiguous space
Entry level, ideally with coursework or work experience in red-teaming, security research, trust & safety, or related fields
Comfortable scripting basic tests (Python, Bash, or similar) and working in Jupyter or prompt-engineering tools
Communicates clearly in English and at least one additional language (ideally a major non-English language relevant to global threat landscapes)
Thinks like an adversary, documents findings crisply, and iterates quickly
Preferred
Full professional proficiency in Arabic, Chinese, Farsi, Portuguese, Russian, or Spanish, as well as English
Prior work in content moderation, disinformation analysis, or cyber-threat intelligence
Experience with prompt-automation frameworks (e.g., Promptfoo, LangChain, Garak)
Familiarity with vector search or LLM fine-tuning workflows
Formal training or certification in red-teaming or penetration testing
Benefits
Opportunity for spot bonuses and an annual performance-based bonus
Fully remote (U.S.-based) with flexible hours
Comprehensive health, dental, and vision coverage
Generous PTO and paid holidays
401(k) plan
Professional-development stipend for courses, conferences, or language study
Company
10a Labs
10a Labs is an applied research and technology company specializing in AI security.
Funding
Current Stage
Early Stage
Company data provided by Crunchbase