SSi People
AI Safety Researcher
SSi People is seeking an AI Safety Researcher to strengthen its AI safety work. The role involves working with a cross-functional team to ensure the safety and trustworthiness of AI features through hands-on research and testing.
Responsibilities
Lead adversarial testing/red-teaming campaigns to identify material gaps, focusing on robust and scalable system alignment (e.g., Preference Tuning, automatic prompt optimization)
Qualifications
Required
Must be local to NYC and able to come onsite once a week – hybrid/remote
W2 ONLY – we are not able to sponsor now or in the future
10+ years of overall experience in Information Technology
3+ years of AI experience
Hands-on experience with LLMs and prompt/context engineering
Proven experience contributing to safety-related projects or research (e.g., adversarial testing, system alignment)
Strong proficiency in Python, Java, and SQL
Preferably pursuing or holding an MSc or PhD in an AI/ML-related field, with a focus on safety or agentic systems
Experience working with cross-language models
Core Expertise: Safety Research and advanced model alignment techniques
Company
SSi People
SSi People, located in Cranberry Township, PA, is an organization with over 25 years of staffing industry experience across various labor verticals in the United States, serving Fortune 1000 companies.
Funding
Current Stage: Growth Stage