Tenth Revolution Group · 1 week ago
Databricks Architect
Tenth Revolution Group is seeking a Data Architect (Databricks) with deep expertise in enterprise data platform design and governance to lead high-priority, client-facing implementations. This role involves defining reference architectures, security frameworks, and cost-management strategies while overseeing the development and optimization of scalable data pipelines and products.
Human Resources · Recruiting · Software · Staffing Agency
Responsibilities
Define the enterprise data platform architecture (e.g., Lakehouse/EDW), utilizing industry-standard reference patterns such as Medallion, Lambda, or Kappa
Lead the technology selection process and design the comprehensive integration blueprint for the data ecosystem
Design conceptual, logical, and physical data models to support multi-tenant and vertical-specific data products
Standardize logical data layers across the organization, including Ingestion/Raw, Staged/Curated, and Serving layers
Establish data governance frameworks, including metadata management, data cataloging, lineage tracking, and data contracts
Define security and compliance controls, covering RBAC/IAM, data masking, encryption, network segmentation, and audit policies
Design and implement classification practices to support downstream Analytics and Machine Learning use cases
Architect for scalability and high availability, defining disaster recovery (RPO/RTO) and cost management strategies for cloud/hybrid environments
Oversee the integration of platform components, including modern data lake technologies, ETL/ELT tools, and orchestration engines
Define ingestion patterns (batch and streaming), storage optimization strategies (partitioning, compaction), and I/O cost-efficiency measures
Specify observability practices, including health dashboards, structured logging, and alerting for data pipelines
Design and enforce CI/CD patterns for data artifacts, including infrastructure-as-code (IaC), automated testing, and deployment/rollback strategies
Develop and optimize distributed processing modules in Python or Scala, along with advanced SQL
Build and tune data pipelines across various distributed computing engines and relational database systems
Implement automated testing and version control for all data transformations and infrastructure
Act as the technical authority and mentor for the Data Engineering organization; lead architecture and code reviews for mission-critical components
Collaborate with cross-functional stakeholders (Product, Security, Infrastructure, BI, and ML) to translate business needs into a technical roadmap
Troubleshoot complex data quality and performance issues, driving a culture of continuous improvement
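The Medallion layering named above (Ingestion/Raw, Staged/Curated, Serving) can be illustrated with a minimal plain-Python sketch; a real Databricks implementation would use PySpark and Delta Lake, and the records and rules here are hypothetical:

```python
from collections import defaultdict

# Hypothetical raw events landing in the Bronze (ingestion/raw) layer.
bronze = [
    {"order_id": "1", "amount": "19.99", "region": "EU"},
    {"order_id": "1", "amount": "19.99", "region": "EU"},  # duplicate ingest
    {"order_id": "2", "amount": "oops", "region": "US"},   # malformed amount
    {"order_id": "3", "amount": "5.00", "region": "US"},
]

def to_silver(records):
    """Silver (staged/curated): enforce types and drop duplicate keys."""
    seen, silver = set(), []
    for r in records:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine malformed rows
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        silver.append({"order_id": r["order_id"], "amount": amount,
                       "region": r["region"]})
    return silver

def to_gold(records):
    """Gold (serving): aggregate revenue per region for BI consumers."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
```

Each layer only reads from the one before it, which is what lets lineage tracking and data contracts attach cleanly at the layer boundaries.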
Qualifications
Required
Deep expertise in enterprise data platform design and governance
Experience in defining reference architectures, security frameworks, and cost-management strategies
Ability to oversee the development and optimization of scalable data pipelines and products
Experience in defining the enterprise data platform architecture (e.g., Lakehouse/EDW)
Knowledge of industry-standard reference patterns such as Medallion, Lambda, or Kappa
Experience in leading the technology selection process and designing comprehensive integration blueprints for data ecosystems
Ability to design conceptual, logical, and physical data models to support multi-tenant and vertical-specific data products
Experience in standardizing logical data layers across the organization, including Ingestion/Raw, Staged/Curated, and Serving layers
Ability to establish data governance frameworks, including metadata management, data cataloging, lineage tracking, and data contracts
Experience in defining security and compliance controls, covering RBAC/IAM, data masking, encryption, network segmentation, and audit policies
Ability to design and implement classification practices to support downstream Analytics and Machine Learning use cases
Experience in architecting for scalability and high availability, defining disaster recovery (RPO/RTO) and cost management strategies for cloud/hybrid environments
Ability to oversee the integration of platform components, including modern data lake technologies, ETL/ELT tools, and orchestration engines
Experience in defining ingestion patterns (batch and streaming), storage optimization strategies (partitioning, compaction), and I/O cost-efficiency measures
Ability to specify observability practices, including health dashboards, structured logging, and alerting for data pipelines
Experience in designing and enforcing CI/CD patterns for data artifacts, including infrastructure-as-code (IaC), automated testing, and deployment/rollback strategies
Ability to develop and optimize distributed processing modules in Python or Scala, along with advanced SQL
Experience in building and tuning data pipelines across various distributed computing engines and relational database systems
Ability to implement automated testing and version control for all data transformations and infrastructure
Experience acting as the technical authority and mentor for the Data Engineering organization
Ability to lead architecture and code reviews for mission-critical components
Experience collaborating with cross-functional stakeholders (Product, Security, Infrastructure, BI, and ML) to translate business needs into a technical roadmap
Ability to troubleshoot complex data quality and performance issues, driving a culture of continuous improvement
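The column-level data masking listed under the security and compliance controls could be sketched as below. This is illustrative only: on Databricks such rules would typically be enforced declaratively (e.g., via catalog-level masking policies), and the masking rule and column names here are assumptions:

```python
def mask_email(value: str) -> str:
    """Keep the first character and the domain; mask the rest of the local part."""
    local, _, domain = value.partition("@")
    if not domain:
        return "***"
    return local[:1] + "***@" + domain

def apply_masks(row: dict, masked_columns: dict) -> dict:
    """Return a copy of the row with a masking function applied per column."""
    return {k: masked_columns.get(k, lambda v: v)(v) for k, v in row.items()}

row = {"customer": "Ada", "email": "ada.lovelace@example.com"}
masked = apply_masks(row, {"email": mask_email})
```

Pairing a rule table like `masked_columns` with RBAC means the unmasked view can be restricted to specific roles while everyone else reads the masked projection.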
Company
Tenth Revolution Group
Tenth Revolution Group is the global leader in cloud talent solutions, uniquely equipped to deliver digital transformation through people.
Funding
Current Stage: Late Stage
Total Funding: unknown
2016-07-29: Debt Financing
2016-04-28: Acquired
Recent News
2025-09-07 · ComputerWeekly.com
2024-12-13 · No support or updates for Windows 11 on machines not meeting minimum hardware requirements, says Microsoft | CIO
2024-12-13
Company data provided by Crunchbase