hud (YC W25) · 5 months ago
Research Engineer, Agentic AI Evals
HUD (YC W25) is developing agentic evals for Computer Use Agents and is seeking a research engineer to build task configurations and environments for evaluation datasets. The role involves delivering custom datasets, contributing to the evaluation harness, and working closely with clients to meet their needs.
Computer Software
Responsibilities
Build out environments for HUD's CUA evaluation datasets, including evals for safety red-teaming, general business tasks, and long-horizon agentic tasks
Deliver custom CUA datasets and evaluation pipelines requested by clients
Contribute to improving the HUD evaluation harness, depending on your interests, skills, and current organizational priorities. *(Optional, but highly valued!)*
Qualifications
Required
Proficiency in Python, Docker, and Linux environments
React experience for frontend development
Strong technical aptitude and demonstrated problem-solving ability
Preferred
Production-level software development experience
Experience at early-stage technology startups, with the ability to work independently in fast-paced environments
Strong communication skills for remote collaboration across time zones
Familiarity with current AI tools and LLM capabilities
Understanding of safety and alignment considerations in AI systems
Evidence of rapid learning and adaptability in technical environments (e.g. programming competitions)
Hands-on experience with, or contributions to, LLM evaluation frameworks (EleutherAI, Inspect, or similar)
Experience building custom evaluation pipelines or datasets
Experience working with agentic or multimodal AI evaluation systems
Company
hud (YC W25)
The all-in-one platform for evaluations on computer use and browser use AI agents.
Funding
Current Stage
Early Stage
Company data provided by Crunchbase