Research Intern (Vision/VLM) jobs in United States

2077AI Open Source Foundation · 1 week ago

Research Intern (Vision/VLM)

2077AI Open Source Foundation is seeking a Research Intern to support their work in video understanding and multimodal reasoning. The intern will contribute to datasets, benchmarks, and experiments aimed at enhancing video-language modeling and temporal understanding.

Computer Software

Responsibilities

Build and refine datasets for video understanding and multimodal reasoning, including temporal QA, action recognition, event prediction, and spatial understanding
Evaluate video-language models (Video-LLMs) and audio-visual datasets, including those derived from large-scale sources such as HowTo100M
Conduct experiments analyzing long-context modeling efficiency, compression strategies, and data optimization techniques
Contribute to benchmark standardization efforts and assist in setting up public leaderboards for evaluation and comparison

Qualifications

Computer Vision · Video Analytics · Multimodal Learning · Video Data Processing · Transformer Models · Video-QA · Action Recognition · Multimodal Reasoning · Relevant Publications

Required

Strong background in computer vision, video analytics, or multimodal learning
Proficient in building and managing video data processing pipelines
Understanding of transformer-based temporal models (e.g., TimeSformer, VideoGPT)

Preferred

Experience with video-QA, action recognition, or multimodal reasoning datasets
Relevant publications in top-tier conferences

Company

2077AI Open Source Foundation

The 2077AI Foundation is at the forefront of AI data standardization and advancement.

Funding

Current Stage
Growth Stage
Company data provided by Crunchbase