Residex®
Data Engineer
Residex® is focused on improving the quality of life for seniors and the caregivers who serve them. They are seeking a Data Engineer to design and build data products in Snowflake, architect scalable ETL/ELT pipelines, and collaborate with cross-functional teams to enhance data usability and quality for analytics and insights.
Consumer Software
Responsibilities
Design and build data products in Snowflake that serve as the foundation for analytics, dashboards, and operational insights—treating data models as intentional products with clear interfaces, documentation, and performance SLAs that enable downstream teams to build with confidence
Architect scalable ETL/ELT pipelines extracting data from production SQL Server databases and transforming it into analytics-ready data products in Snowflake, ensuring reliability, accuracy, and usability across thousands of healthcare communities
Collaborate deeply with the Data UI/UX Developer and domain experts to understand how data will be consumed and design data models that anticipate analytics needs, reduce friction, and enable self-service exploration rather than just meeting immediate requirements
Translate business logic embedded in legacy application code and stored procedures into maintainable, well-documented data layer transformations using modern tools such as dbt, ensuring business rules are accurate, auditable, and positioned as reusable data products
Build dimensional data models in Snowflake including star and snowflake schemas that balance query performance, analytical flexibility, and maintainability—designing data structures that empower rather than constrain downstream analytics development
Champion data product thinking by establishing clear data contracts, semantic definitions, and quality guarantees that give BI developers, analysts, and business users confidence in the data they're building on
Implement DevOps best practices for data pipelines including version control (Git), CI/CD automation, infrastructure as code, monitoring, and alerting to ensure data products are deployed reliably and evolve safely as requirements change
Establish and maintain data quality frameworks including validation rules, reconciliation processes, and automated testing to ensure analytics products meet healthcare industry accuracy standards and data consumers can trust the foundation they're building on
Document data lineage, transformation logic, and business rules using tools like Dataedo or equivalent data catalog platforms, creating living documentation that helps downstream teams understand what data means, where it comes from, and how to use it effectively
Work closely with the Data Architect to implement warehouse architecture decisions including schema design, indexing strategies, partitioning, and query optimization that support sub-second dashboard response times and enable scalable self-service analytics
Optimize pipeline performance and cost efficiency in Snowflake through query tuning, materialized views, clustering, and efficient data loading patterns while maintaining the usability and accessibility of data products
Engage with data consumers (Data UI/UX Developer, analysts, domain experts) to gather feedback on data product usability, identify pain points in data access or structure, and continuously evolve data models to better serve their workflows
Support data governance initiatives including implementing access controls, audit logging, and HIPAA-compliant data handling practices for protected health information (PHI) while ensuring appropriate data discoverability and access for authorized users
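The data quality framework described above (validation rules, reconciliation processes, automated testing) might look like the following minimal sketch in Python. Table contents, column names, and rules here are purely illustrative; in practice a team in this role would more likely express these checks as dbt tests running against Snowflake.

```python
# Minimal sketch of a data quality framework: per-column validation rules
# plus a source-to-target reconciliation check. All data is hypothetical.

def check_not_null(rows, column):
    """Return indices of rows where a required column is missing."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_in_range(rows, column, lo, hi):
    """Return indices of rows where a numeric column falls outside [lo, hi]."""
    return [i for i, row in enumerate(rows)
            if row.get(column) is not None and not lo <= row[column] <= hi]

def reconcile_counts(source_rows, target_rows):
    """Reconciliation: the warehouse should hold every extracted source row."""
    return len(source_rows) == len(target_rows)

# Hypothetical extract of a residents table.
residents = [
    {"resident_id": 1, "age": 84},
    {"resident_id": 2, "age": None},   # fails the not-null rule
    {"resident_id": 3, "age": 212},    # fails the range rule
]

null_failures = check_not_null(residents, "age")
range_failures = check_in_range(residents, "age", 0, 120)
```

Failing row indices would feed an alerting step rather than silently loading, which is what lets downstream consumers trust the data products built on top.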
Qualifications
Required
5+ years of experience building production data pipelines and data products at scale, with demonstrated ability to design data models that serve downstream analytics and reporting needs effectively
Strong data product mindset—you understand that data engineering isn't just about moving data; it's about creating reliable, well-documented, usable data assets that enable others to build analytics products that improve how people work
Deep SQL development skills including complex queries, window functions, CTEs, stored procedures, and performance optimization for both SQL Server (source) and Snowflake (target)
Hands-on experience with Snowflake architecture including warehouses, databases, schemas, stages, streams, tasks, and understanding of Snowflake-specific optimization techniques
Strong proficiency with ETL/ELT tools, with dbt strongly preferred for transformation logic, version control, testing, and documentation as a data product development framework
Demonstrated DevOps mindset including experience with Git workflows, CI/CD pipelines (GitHub Actions, GitLab CI, or similar), infrastructure automation, and deployment best practices
Experience designing dimensional models (star schema, snowflake schema, slowly changing dimensions) that balance analytical flexibility with query performance and are intuitive for BI developers and analysts to consume
Track record of collaborating with BI developers, analysts, and business stakeholders to understand how data will be used and designing models that enable rather than constrain downstream analytics development
Strong data quality orientation with experience implementing validation frameworks, reconciliation processes, and automated testing to ensure data products meet accuracy standards and inspire user confidence
Experience with data documentation and lineage tools such as Dataedo, Atlan, Alation, or similar data catalog platforms for creating accessible, maintainable documentation that helps data consumers understand and use data effectively
Understanding of data contracts, semantic layers, and data product interfaces that establish clear expectations between data producers and consumers
Excellent collaboration and communication skills with ability to translate technical data concepts into terms that resonate with BI developers, analysts, and business users
Strong documentation skills for capturing not just what data models contain, but why they're structured the way they are and how downstream teams should use them
Experience extracting data from SQL Server including understanding of change data capture (CDC), incremental loading patterns, and handling large-scale data migrations
Comfortable working independently while actively seeking feedback from data consumers to continuously improve data product usability and effectiveness
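The incremental loading pattern mentioned above (CDC-style extraction from SQL Server) can be sketched with a high-watermark approach: only rows modified since the last successful run are pulled. The data, column names, and timestamp format below are illustrative assumptions, not the company's actual schema.

```python
# Sketch of an incremental (high-watermark) extract: select only rows
# changed since the stored watermark, then advance the watermark.
# ISO-8601 date strings compare correctly as plain strings.

def incremental_extract(source_rows, last_watermark):
    """Return rows modified after the watermark, plus the new watermark."""
    changed = [r for r in source_rows if r["modified_at"] > last_watermark]
    new_watermark = max(
        (r["modified_at"] for r in changed), default=last_watermark
    )
    return changed, new_watermark

# Hypothetical source table snapshot.
source = [
    {"id": 1, "modified_at": "2025-01-10"},
    {"id": 2, "modified_at": "2025-01-15"},
    {"id": 3, "modified_at": "2025-01-20"},
]

batch, watermark = incremental_extract(source, "2025-01-12")
```

Persisting the returned watermark between runs is what makes the load restartable; SQL Server's built-in CDC tables or Snowflake streams would replace the hand-rolled filter in a production pipeline.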
Preferred
Healthcare or senior living industry experience strongly preferred; familiarity with EHR data structures, clinical workflows, regulatory compliance requirements, and PHI data handling under HIPAA
Experience with real-time data pipelines, streaming architectures (Kafka, event-driven patterns), or message queuing systems is a plus
Familiarity with Python or other scripting languages for data processing, automation, and pipeline orchestration is a plus
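The scripted pipeline orchestration mentioned above can be sketched as a tiny dependency-ordered task runner. Task names and bodies are hypothetical; a real deployment would use an orchestrator such as Airflow or dbt rather than this hand-rolled runner.

```python
# Minimal sketch of pipeline orchestration: tasks declared with upstream
# dependencies and executed in dependency order. No cycle detection is
# included; this only illustrates the ordering idea.

def run_pipeline(tasks, deps):
    """Run each task after its dependencies; return the execution order."""
    order, done = [], set()

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)           # ensure prerequisites ran first
        tasks[name]()               # execute the task body
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "extract":   lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
    "load":      lambda: log.append("load"),
}
deps = {"transform": ["extract"], "load": ["transform"]}
order = run_pipeline(tasks, deps)
```

Declaring dependencies as data (the `deps` mapping) rather than hard-coding call order is the same design choice that makes dbt model graphs and Airflow DAGs maintainable.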
Benefits
Purpose That Matters: Be part of a mission that directly impacts the quality of life for seniors and the caregivers who serve them.
Real Platform Ownership: Build the data foundation that powers analytics and insights for thousands of healthcare communities nationwide—creating data products that enable teams to deliver life-changing insights to healthcare professionals.
High-Trust Leadership: Work shoulder-to-shoulder with a CPO and technical lead who value autonomy, vision, and results.
Rapid Growth, Real Impact: Join at a high-growth inflection point with the resources, customers, and market tailwinds to go big.
A Culture of Craft and Care: We take pride in what we build and care deeply about the people we build it for—and with.
Company
Residex®
Residex® delivers Intelligent Care through a secure, all-in-one assisted living software platform that empowers caregivers with data-driven tools for smarter, safer, and more efficient care.
Funding
Current Stage: Early Stage
Total Funding: unknown
2025-01-22: Acquired
Recent News: 2025-01-24
Company data provided by crunchbase