Google DeepMind · 2 months ago

Red Teaming Lead, Responsibility

Google DeepMind is a team of scientists and engineers working to advance artificial intelligence for public benefit. As a Red Teaming Lead, you will manage and grow the frontier risk red teaming program, conducting hands-on exercises to identify and mitigate risks associated with advanced AI models.

Artificial Intelligence (AI) · Business Development · Foundational AI · Machine Learning
Growth Opportunities

Responsibilities

Leading and managing the end-to-end Responsibility & Safety red teaming program for Google DeepMind
Designing and implementing expert red teaming of advanced AI models to identify risks, vulnerabilities, and failure modes across emerging risk areas such as CBRNe, cyber, and socioaffective behaviors
Partnering with external red teamers and specialist groups to design and execute novel red teaming exercises
Collaborating closely with product and engineering teams to design and develop innovative red teaming tooling and infrastructure (a minimal, hypothetical sketch of such tooling follows this list)
Converting high-level risk questions into detailed testing plans and implementing them, influencing others to support as necessary
Working alongside a team of multidisciplinary specialists to deliver priority projects and to incorporate diverse considerations into that work
Communicating findings and recommendations to wider stakeholders across Google DeepMind and beyond
Providing an expert perspective on AI risks, testing methodologies, and vulnerability analysis in diverse projects and contexts
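
The tooling bullet above is the most hands-on part of the role. As an illustration only, here is a minimal sketch of one shape such red teaming tooling could take: a harness that sends adversarial probes to a model under test and flags responses against per-risk-area heuristics. Everything in it (query_model, RISK_PATTERNS, Finding, the probe strings) is hypothetical rather than drawn from the posting or any DeepMind system, and real programs rely on trained classifiers and expert human review rather than keyword matching.

import re
from dataclasses import dataclass

# Hypothetical keyword heuristics per risk area; illustrative only.
RISK_PATTERNS = {
    "cbrn": re.compile(r"\b(synthesis route|precursor|enrichment)\b", re.I),
    "cyber": re.compile(r"\b(exploit|privilege escalation|payload)\b", re.I),
}

@dataclass
class Finding:
    probe: str       # the adversarial prompt that was sent
    response: str    # what the model under test returned
    risk_area: str   # which heuristic fired

def query_model(prompt: str) -> str:
    # Stub standing in for a real inference call to the model under test.
    return "Sorry, I can't help with that."

def run_probes(probes: list[str]) -> list[Finding]:
    # Send each probe to the model and record any response that matches
    # a risk heuristic, so a human reviewer can triage the hits.
    findings: list[Finding] = []
    for probe in probes:
        response = query_model(probe)
        for risk_area, pattern in RISK_PATTERNS.items():
            if pattern.search(response):
                findings.append(Finding(probe, response, risk_area))
    return findings

if __name__ == "__main__":
    probes = ["..."]  # adversarial prompt variants would go here
    for finding in run_probes(probes):
        print(f"[{finding.risk_area}] {finding.probe!r} -> {finding.response!r}")

In practice a harness like this would be one layer of a larger pipeline, with flagged findings routed to human specialists for triage and then communicated to wider stakeholders, matching the remaining bullets above.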

Qualifications

Red teaming experience · Understanding AI risks · Technical understanding of AI · Program management skills · Collaboration skills · Communication skills · Testing approach analysis · Hands-on safety evaluations · Experimentation techniques · Agile product development · Sociotechnical considerations · Fast-paced adaptability

Required

Demonstrated experience running or managing red teaming or novel testing programs, particularly for AI systems
A strong, comprehensive understanding of sociotechnical AI risks from recognized systemic risks to emergent risk areas
A solid technical understanding of how modern AI models, particularly large language models, are built and operate
Strong program management skills with a track record of successfully delivering complex, cross-functional projects
Demonstrated ability to work within cross-functional teams, fostering collaboration and influencing outcomes
Ability to present complex technical findings to both technical and non-technical teams, including senior stakeholders
Ability to thrive in a fast-paced environment and to pivot to support emerging needs
Demonstrated ability to identify and clearly communicate challenges and limitations in testing approaches and analyses

Preferred

Direct, hands-on experience in safety evaluations and developing mitigations for advanced AI systems
Experience with a range of experimentation and evaluation techniques, such as human study research, AI or product red teaming, and content rating processes
Experience working with product development or in similar agile settings
Familiarity with sociotechnical and safety considerations of generative AI, including systemic risk domains identified in the EU AI Act (chemical, biological, radiological, and nuclear; cyber offense; loss of control; harmful manipulation)

Benefits

Bonus
Equity

Company

Google DeepMind

Google DeepMind aims to research and build safe artificial intelligence systems in order to solve intelligence, advance science, and benefit humanity. It is a sub-organization of Google.

Funding

Current Stage: Late Stage
Total Funding: unknown
2014-01-26: Acquired
2011-02-01: Series A

Leadership Team

Demis Hassabis
Co-Founder & CEO
Aaron Saunders
VP of Hardware Engineering, Robotics
Company data provided by Crunchbase