bem
Platform Engineer
bem is a company focused on building the infrastructure layer for modern enterprise workflows, aiming to automate unstructured data processes with high accuracy through its AI platform. It is seeking a Platform Engineer to architect data and compute infrastructure, solve infrastructure challenges, and collaborate with a veteran team to transform business operations.
Artificial Intelligence (AI) · Enterprise Software
Responsibilities
Own the Data Pipeline: You'll be the architect of how data flows through our systems—from ingestion to processing to delivery. Your expertise in schemas, CAP tradeoffs, and transactional guarantees will directly impact customer value
Solve Uncharted Infrastructure Problems: The infrastructure challenges of scaling AI systems are an open sea. You won't be implementing solved problems; you'll be defining the future of scalable GPU compute and AI infrastructure on the cloud and at the edge. No CUDA, shoulda, woulda here!
Work with a Veteran Team: You will be working alongside, and learning from, seasoned experts who have built and scaled products to hundreds of millions of users across domains as varied as SaaS for agriculture and grocery stores, AI agents in healthcare, cloud infrastructure, and AI code co-pilots (pre-transformer models!)
Architect Data Infrastructure: Design and build robust data pipelines that handle stochastic data with transactional guarantees. Automate DDL evolution, ensure low latency and high durability, and make complex data simple for our customers
Build Multi-Cloud Foundations: Deploy and manage infrastructure across AWS, Azure, and GCP—this is table stakes for our enterprise distribution strategy
Infrastructure as Code: Everything is code. Use Terraform (OpenTofu) and Pulumi to create repeatable, versioned, and auditable infrastructure
Security-First Mindset: Foster a security-first culture that protects user data and reduces GTM friction—from software supply chain security to VPC configurations that satisfy highly secure enterprise requirements
Automate Everything: Apply a software engineering mindset to eliminate manual toil and build self-healing systems
Collaborate and Educate: Work as an embedded expert within the engineering team, guiding best practices in reliability, scalability, and observability
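The pipeline bullets above lean on ideas like transactional guarantees over retried, at-least-once delivery. As a purely illustrative sketch (none of this is from bem's actual stack; `IdempotentSink` and its record ids are hypothetical names), exactly-once application is often built from at-least-once delivery plus deduplication:

```python
# Illustrative only: a toy version of the "transactional guarantee" the
# data-pipeline bullet describes -- deduplicating ingestion so that a
# retried delivery is applied exactly once. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class IdempotentSink:
    """Applies each record at most once, keyed by a client-supplied id."""
    seen: set = field(default_factory=set)
    store: list = field(default_factory=list)

    def ingest(self, record_id: str, payload: dict) -> bool:
        """Return True if the record was applied, False if it was a duplicate."""
        if record_id in self.seen:
            return False  # retry of an already-applied record: no-op
        self.seen.add(record_id)
        self.store.append(payload)
        return True


sink = IdempotentSink()
assert sink.ingest("evt-1", {"v": 1}) is True   # first delivery applied
assert sink.ingest("evt-1", {"v": 1}) is False  # redelivery ignored
assert len(sink.store) == 1
```

Real pipelines track the dedup keys durably (e.g., in the warehouse transaction itself) rather than in memory, but the shape of the guarantee is the same.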
Qualifications
Required
Deep understanding of data systems, including:
Schema design and evolution
CAP theorem tradeoffs in practice
Stream processing and batch processing patterns
Experience with modern data warehouses (e.g., Snowflake, Databricks, ClickHouse, or MotherDuck) and relational database engines
Hands-on experience with at least two major cloud platforms (AWS, Azure, GCP) and an understanding of their nuanced differences
Proven experience componentizing and managing infrastructure through IaC tools
Practical experience with infrastructure security, including container security, network isolation, and meeting enterprise compliance requirements
Strong expertise in modern cloud networking, including virtual network design, VPC peering, VPN management, and the implementation of Zero Trust principles
Strong coding skills and ability to build tools, not just configure them
Preferred
Experience with GPU compute infrastructure and ML model serving
Familiarity with edge computing and hybrid cloud architectures
Background in high-throughput, low-latency systems
Experience with observability platforms and distributed tracing
Contributions to open-source infrastructure projects
Benefits
Competitive comp, including early equity.
Hybrid work environment (3 days in-office) with flexibility to balance work and life.
Company
bem
Ambient AI computing layer for modern software, transforming any input into structured, real-time understanding
Funding
Current Stage: Early Stage
Total Funding: $3.7M
Key Investors: Uncork Capital
2024-06-06 · Seed · $3.7M
Company data provided by Crunchbase