hud (YC W25)
Research Engineer, Agentic AI Evals
HUD (YC W25) is developing agentic evals for Computer Use Agents (CUAs) that browse the web. They are seeking a research engineer to help build out task configurations and environments for evaluation datasets on HUD's CUA evaluation framework.
Computer Software
Responsibilities
Build out environments for HUD's CUA evaluation datasets, including evals for safety red-teaming, general business tasks, and long-horizon agentic tasks
Deliver custom CUA datasets and evaluation pipelines requested by clients
Contribute to improving the HUD evaluation harness, depending on your interests, skills, and current organizational priorities. (Optional, but highly valued!)
Qualifications
Required
Proficiency in Python, Docker, and Linux environments
React experience for frontend development
Production-level software development experience (preferred)
Strong technical aptitude and demonstrated problem-solving ability
Preferred
Experience at early-stage startups and the ability to work independently in fast-paced environments
Strong communication skills for remote collaboration across time zones
Familiarity with current AI tools and LLM capabilities
Understanding of safety and alignment considerations in AI systems
Evidence of rapid learning and adaptability in technical environments (e.g. programming competitions)
Hands-on experience with, or contributions to, LLM evaluation frameworks (EleutherAI, Inspect, or similar)
Experience building custom evaluation pipelines or datasets
Experience working with agentic or multimodal AI evaluation systems
Benefits
Visa Sponsorship
Relocation and visa support to the USA or Singapore for strong full-time candidates
Company
hud (YC W25)
The all-in-one platform for evaluating computer-use and browser-use AI agents.
Funding
Current Stage
Early Stage