LLM Training Dataset and Checkpoint Optimization Engineer
Together AI is a leader in AI infrastructure that powers the training of state-of-the-art models. The company is seeking an LLM Training Dataset and Checkpoint Optimization Engineer to optimize data pipelines and checkpoint mechanisms for large-scale machine learning workloads, ensuring high performance and reliability in training workflows.
Artificial Intelligence (AI) · Generative AI · Internet · IT Infrastructure · Open Source
Responsibilities
Design and optimize high-throughput data pipelines for streaming and processing massive training datasets
Implement caching, sharding, and prefetching techniques to maximize data-loading efficiency
Ensure efficient integration with distributed storage systems (e.g., S3, GCS, Lustre, Ceph)
Build and optimize distributed checkpoint mechanisms for large-scale training workflows
Implement techniques to minimize checkpoint I/O overhead and ensure fault tolerance
Develop incremental and differential checkpointing solutions to reduce storage costs
Profile and debug bottlenecks in data pipelines and checkpoint systems
Optimize for GPU/TPU utilization by ensuring efficient data feeding and checkpoint recovery times
Develop systems that scale efficiently across thousands of nodes and petabyte-scale datasets
Ensure fault-tolerant recovery and resume mechanisms for long-running training jobs
Work closely with ML researchers, data engineers, and infrastructure teams to understand workload requirements
Build tools and frameworks to enable seamless integration of dataset and checkpointing systems with existing ML workflows
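As a flavor of the data-loading work described above, here is a minimal sketch of asynchronous prefetching: a bounded background queue keeps the next few batches ready so compute never stalls on I/O. All names (`PrefetchLoader`, `load_shard`) are illustrative, not part of any specific framework, and a real pipeline would read shards from object storage rather than generate them in memory.

```python
import queue
import threading

class PrefetchLoader:
    """Wraps an iterable and loads items ahead of the consumer on a
    background thread, so training compute overlaps with data I/O."""

    _SENTINEL = object()

    def __init__(self, source, depth=4):
        self._source = source
        self._queue = queue.Queue(maxsize=depth)  # bounded: caps memory use
        self._thread = threading.Thread(target=self._producer, daemon=True)

    def _producer(self):
        for item in self._source:
            self._queue.put(item)        # blocks once `depth` items are buffered
        self._queue.put(self._SENTINEL)  # signal end of stream

    def __iter__(self):
        self._thread.start()
        while True:
            item = self._queue.get()
            if item is self._SENTINEL:
                break
            yield item

def load_shard(i):
    # Stand-in for reading a shard from S3/GCS/Lustre (hypothetical).
    return list(range(i * 4, i * 4 + 4))

batches = list(PrefetchLoader(load_shard(i) for i in range(3)))
```

The bounded queue is the key design choice: an unbounded buffer would hide slow consumers and grow memory without limit, while `maxsize=depth` applies backpressure to the producer thread.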
Qualifications
Required
5+ years of experience in data engineering, distributed systems, or ML infrastructure
Expertise in high-performance data processing libraries (e.g., PyTorch DataLoader, TensorFlow Data, DALI)
Proficiency in distributed storage systems and data formats (e.g., Parquet, HDF5)
Strong understanding of checkpointing frameworks and file systems (e.g., POSIX, Lustre, GPFS)
Proficient in Python, C++, or Go for performance-critical systems
Experience with I/O optimization techniques (e.g., asynchronous data loading, prefetching)
Familiarity with compression and serialization for large datasets and checkpoints
Analytical and problem-solving mindset
Strong communication and collaboration skills across teams
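The compression-and-serialization familiarity listed above can be illustrated with a minimal round-trip sketch: zlib over JSON, chosen only because both are in the standard library. Production systems would typically use columnar or tensor-native formats (e.g. Parquet, as named in the requirements) instead.

```python
import json
import zlib

def serialize(state: dict) -> bytes:
    """JSON-encode, then zlib-compress, a JSON-serializable state dict."""
    return zlib.compress(json.dumps(state, sort_keys=True).encode())

def deserialize(blob: bytes) -> dict:
    """Inverse of serialize: decompress, then JSON-decode."""
    return json.loads(zlib.decompress(blob))

state = {"step": 10, "weights": [0.1, 0.2, 0.3]}
blob = serialize(state)
restored = deserialize(blob)
```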
Preferred
Experience with ML frameworks (e.g., PyTorch, TensorFlow, JAX) and distributed training
Familiarity with hardware accelerators (e.g., GPUs, TPUs) and storage optimizations
Open-source contributions to, or projects involving, data pipelines or checkpointing
Experience with incremental and real-time checkpointing solutions
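Two of the themes above, incremental checkpointing and fault-tolerant writes, can be combined in one small sketch: persist only the state shards whose content hash changed since the last checkpoint, and write each file via a temp-file-plus-rename so a crash mid-save never corrupts an existing checkpoint. Every name here (`save_incremental`, the shard layout) is a hypothetical illustration, not a real framework's API.

```python
import hashlib
import json
import os
import tempfile

def _digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def save_incremental(state: dict, ckpt_dir: str, last_digests: dict) -> dict:
    """Write only the shards of `state` that differ from `last_digests`;
    return the new digest map to pass into the next save."""
    os.makedirs(ckpt_dir, exist_ok=True)
    new_digests = {}
    for shard_name, value in state.items():
        blob = json.dumps(value, sort_keys=True).encode()
        new_digests[shard_name] = _digest(blob)
        if last_digests.get(shard_name) == new_digests[shard_name]:
            continue  # unchanged shard: skip the write entirely
        # Atomic write: dump to a temp file, then rename into place.
        fd, tmp = tempfile.mkstemp(dir=ckpt_dir)
        with os.fdopen(fd, "wb") as f:
            f.write(blob)
        os.replace(tmp, os.path.join(ckpt_dir, shard_name + ".json"))
    return new_digests

# Usage: the second save skips shards whose contents did not change.
with tempfile.TemporaryDirectory() as d:
    digests = save_incremental({"layer0": [1, 2], "layer1": [3]}, d, {})
    digests2 = save_incremental({"layer0": [1, 2], "layer1": [9]}, d, digests)
    changed = digests2["layer1"] != digests["layer1"]
```

`os.replace` is atomic on POSIX filesystems within one filesystem, which is what makes the rename step crash-safe; real incremental checkpointing for model weights would hash tensor shards rather than JSON blobs.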
Benefits
Health insurance
Startup equity
Other competitive benefits
Company
Together AI
Together AI is a cloud platform for building open-source generative AI and the infrastructure for developing AI models.
H1B Sponsorship
Together AI has a track record of offering H1B sponsorships. Please note that this does not guarantee sponsorship for this specific role. The figures below are provided for reference (data from the US Department of Labor).
Total sponsorships by year: 2025 (19) · 2024 (6) · 2023 (3)
Funding
Current Stage: Growth Stage
Total Funding: $533.5M
Key Investors: Salesforce Ventures, Lux Capital
2025-02-20 · Series B · $305M
2024-03-13 · Series A · $106M
2023-11-29 · Series A · $102.5M
Company data provided by Crunchbase.