Vision/SLAM Engineer — Embodied Robotics · United States
This job has closed.

ndimensions labs · 4 months ago

Vision/SLAM Engineer — Embodied Robotics

Ndimensions labs is a team of technologists from prestigious universities aiming to turn deep tech into products. They are seeking a Vision/SLAM-focused Robotics Software Engineer to develop and implement advanced perception and mapping systems for intelligent robotic behavior.
Computer Software

Responsibilities

Own the mapping stack: design and ship visual SLAM pipelines (front end + back end) with robust tracking, loop closure, and relocalization under tight latency/compute budgets
Build semantic maps: fuse geometry with semantics into multi-layer maps usable by planners and policies
World models: develop learned predictive/latent-state models that capture scene dynamics and uncertainty; integrate them with control and task policies
Multi-sensor fusion: calibrate and fuse RGB/RGB-D/LiDAR/IMU/wheel-odometry data; handle time sync, extrinsics, and degraded sensing
Representation learning: adapt ViTs/VLMs for segmentation, detection, tracking, place recognition, and 3D understanding; learn scene graphs and object-centric representations
Advance the stack: explore beyond current VLAs (OpenVLA/RT-2/RT-X); adapt ViTs (DINO, SAM), VLMs (CLIP, BLIP-2, LLaVA), and diffusion planners (UniPi, Diffusion Policy) for mapping-aware control

Qualifications

SLAM expertise · C++ · Python · Multi-sensor fusion · Semantic mapping · World-modeling · CV background · CUDA/TensorRT · ROS2 · Data curation · Sim + real

Required

SLAM expertise: visual/VIO/VSLAM experience (feature- or direct-based), bundle adjustment, factor graphs, pose-graph optimization, loop closure, place recognition, robust estimation
Semantic mapping: panoptic/instance segmentation, 2D-to-3D lifting, multi-layer map fusion, uncertainty modeling, lifelong/incremental mapping
World-modeling: learned state-space models, dynamics prediction
Strong CV & multimodal background: transformer-based models, self-supervised learning, tracking, foundation model adaptation for robotics
Engineering: C++ and Python; CUDA/TensorRT a plus; ROS2; strong profiling/latency discipline; productionizing perception systems on robots
Data: curation/augmentation for robotics; evaluation protocols
Sim + real: Isaac/MuJoCo/Habitat and on-robot bring-up; optimization libs (Ceres, GTSAM), geometric libs (OpenCV, Open3D)
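To make the back-end requirements above concrete, here is a minimal sketch of the linear core of pose-graph optimization with a loop closure: translation-only 2D nodes, relative-displacement edges, and a least-squares solve that spreads odometry drift across the graph. This toy example is not from the posting; a real stack would add rotations, robust kernels, and a factor-graph library such as GTSAM or Ceres.

```python
import numpy as np

def optimize_pose_graph(num_nodes, edges, anchor=0):
    """Recover 2D node positions from relative-displacement measurements.

    edges: list of (i, j, z) where z is the measured displacement x_j - x_i.
    The anchor node is fixed at the origin to remove the gauge freedom.
    Returns a (num_nodes, 2) array of optimized positions.
    """
    free = [k for k in range(num_nodes) if k != anchor]
    idx = {k: c for c, k in enumerate(free)}
    # Stack one 2-row block per edge into a dense linear system A x = b.
    A = np.zeros((2 * len(edges), 2 * len(free)))
    b = np.zeros(2 * len(edges))
    for row, (i, j, z) in enumerate(edges):
        b[2 * row:2 * row + 2] = z
        if i != anchor:
            A[2 * row:2 * row + 2, 2 * idx[i]:2 * idx[i] + 2] -= np.eye(2)
        if j != anchor:
            A[2 * row:2 * row + 2, 2 * idx[j]:2 * idx[j] + 2] += np.eye(2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x = np.zeros((num_nodes, 2))
    for k, c in idx.items():
        x[k] = sol[2 * c:2 * c + 2]
    return x

# Square trajectory: odometry with drift on one edge, plus a loop closure.
edges = [
    (0, 1, np.array([1.0, 0.0])),
    (1, 2, np.array([0.0, 1.0])),
    (2, 3, np.array([-1.0, 0.1])),  # drifted odometry measurement
    (3, 0, np.array([0.0, -1.0])),  # loop closure back to the start
]
poses = optimize_pose_graph(4, edges)
print(poses)  # the 0.1 drift is distributed across the graph, not piled on node 3
```

Dead reckoning would put node 3 at (0, 1.1); after the solve it lands near (0, 1.025), with the residual shared among all edges, which is exactly the effect loop closure has in a full SLAM back-end.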

Preferred

Differentiable SLAM or neural fields (NeRF/3DGS) integrated with classical stacks
Active perception, task-driven exploration, or belief-space planning
Publications at top venues (CVPR/ICCV/ECCV/CoRL/RSS/ICRA/IROS)
Experience with large-scale multi-robot mapping or map compression/streaming

Company

ndimensions labs

From deep science to global scale, we bridge the gap between groundbreaking research and products.

Funding

Current Stage
Early Stage
Company data provided by Crunchbase