2077AI Open Source Foundation · 2 weeks ago
Research Intern (LLM)
2077AI Open Source Foundation is looking for a Research & Evaluation Intern to help build advanced QA datasets and evaluate large language models. This role is ideal for students passionate about LLMs, evaluation science, and the intersection of research and applied data work.
Computer Software
Responsibilities
Design and build high-quality, graduate/PhD-level QA datasets inspired by the GPQA, HLE, and AI4Sci benchmark families, collaborating with a global network of researchers
Evaluate large language models on reasoning, factuality, and problem-solving benchmarks
Develop review pipelines and quality-control criteria for expert-level question generation
Analyze model outputs, conduct error taxonomy studies, and summarize insights for internal reports and research papers
Collaborate with the 2077AI Foundation’s open-source benchmark teams on public dataset releases
Qualifications
Required
Strong background in computer science, data engineering, artificial intelligence, or related fields, with hands-on experience in large-scale data systems
1+ years of experience with LLMs, prompt engineering, and evaluation frameworks (e.g., lm-evaluation-harness, OpenCompass)
Excellent written and verbal English skills and analytical reasoning
Strong execution and project management skills; able to translate high-level objectives into actionable plans and drive outcomes
Preferred
Experience with formal methods, chain-of-thought evaluation, or curriculum generation
Relevant publications in top conferences
Company
2077AI Open Source Foundation
The 2077AI Foundation is at the forefront of AI data standardization and advancement.
Funding
Current Stage
Growth Stage
Company data provided by Crunchbase