Senior Research Engineer - Perception & Foundation Models
Zendar is a company focused on developing advanced radar-based vehicular perception systems for the automotive industry. They are seeking a Senior Machine Learning Research Engineer to design and implement multi-sensor perception models that fuse camera and radar data for real-world autonomy applications.
Automotive · Navigation
Responsibilities
Own architecture and technical strategy for multi-sensor perception models, including explicit tradeoffs (why approach A vs B), risks, validation plans, and timelines
Build foundation-scale / transformer-based perception models trained from scratch on large-scale multi-modal driving datasets (not limited to fine-tuning)
Develop fusion architectures for streaming multi-sensor inputs (camera/radar/lidar), with early fusion and temporal fusion; align training objectives to real-world reliability targets
Deliver production-ready models for: occupancy / free-space / dynamic occupancy (full-scene understanding), 3D object detection and tracking, and lane line / road structure estimation
Drive long-tail reliability (e.g., toward “four nines” behavior in defined conditions)
Partner with platform/embedded teams to ensure models meet real-time constraints (latency, memory, throughput) and integrate cleanly via stable interfaces for downstream consumers
Qualifications
Required
Deep expertise in deep learning for perception, especially transformer-based architectures, temporal modeling, and multi-modal learning
Proficiency with Python and a major deep learning framework (e.g., PyTorch, TensorFlow)
5+ years of experience (or a PhD) designing and implementing ML systems, with demonstrated ownership of research/production outcomes
Demonstrated experience training large models from scratch (not only fine-tuning)
Strong experience with multi-sensor fusion (camera/radar/lidar) and real-world sensor data
Strong understanding of the end-to-end perception stack and downstream needs (interfaces, uncertainty, temporal stability, failure modes)
Ability to lead architectural discussions: articulate tradeoffs, quantify risks/benefits, and set realistic milestones and timelines
Preferred
PhD in a relevant field (Machine Learning, Computer Vision, Robotics)
Experience with foundation models for autonomy and robotics, including multi-modal pretraining, self-supervised learning, and scaling laws / model scaling strategies
Experience with TransFusion-style or related fusion paradigms (transformer-based fusion across modalities and time), including building from first principles
Experience with BEV-centric perception, 3D detection, occupancy networks, tracking, and streaming inference
Benefits
Benefits including medical, dental, and vision insurance
Flexible PTO
Equity
Daily catered lunch and a stocked fridge (when working out of the Berkeley, CA office)
Company
Zendar
Zendar develops high-definition radar for autonomous vehicles.
H1B Sponsorship
Zendar has a track record of offering H-1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. Additional information is provided below for reference. (Data powered by the US Department of Labor)
[Chart: Distribution of job fields receiving sponsorship, highlighting fields similar to this role]
Trends of Total Sponsorships
2025: 2 · 2024: 2 · 2023: 1 · 2022: 6 · 2021: 1
Funding
Current Stage: Growth Stage
Total Funding: $22.55M
Key Investors: NXP Semiconductors, Hyundai Mobis, Khosla Ventures
2023-11-02: Series Unknown
2022-01-27: Series B · $4M
2021-06-18: Series Unknown · $8M
Recent News
FundersClub (2026-01-21)
Google Patent (2025-04-02)
Google Patent (2025-02-08)
Company data provided by Crunchbase