Databricks & Python Technical Lead
eSimplicity is a modern digital services company that partners with government clients to improve lives and ensure security. The Databricks & Python Technical Lead will oversee technical implementation and development of data solutions, guide the team in building scalable analytics solutions, and ensure adherence to technical standards.
Health Care · Information Technology · Software · Telecommunications
Responsibilities
Leads the technical implementation and development of data solutions using Databricks and Python, ensuring best practices in code development, and guiding the team in building scalable, efficient analytics solutions
Ensures adherence to the technical approach and to deployment and security standards
Identifies and owns all technical solution requirements in developing enterprise-wide data architecture
Provides subject matter expertise on data and data pipeline architecture and leads the decision process to identify the best options
Assembles large, complex data sets that meet functional and non-functional business requirements
Implements, with the support of project data specialists, large-dataset engineering: data augmentation, data quality analysis, data analytics (anomaly and trend detection), data profiling, and data algorithms; measures and develops data maturity models and develops data strategy recommendations
Develops and maintains data pipelines using Databricks; develops and updates Extract, Transform, Load (ETL) processes
Creates project-specific technical designs, product and vendor selections, and application and technical architectures
Serves as the owner of complex data architectures, continually reengineering and refactoring to ensure the simplest, most elegant system possible for the need at hand
Ensures strategic alignment of technical design and architecture with business growth and direction, and stays current with emerging technologies
Expands and optimizes our data and data pipeline architecture, and optimizes data flow and collection for cross-functional teams
Supports software developers, database architects, data analysts, and data scientists on data initiatives and ensures that the optimal data delivery architecture is consistent across ongoing projects
Identifies, designs, and implements internal process improvements, including redesigning data infrastructure for greater scalability, optimizing data delivery, and automating manual processes
Builds the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies
Builds analytical tools that use the data pipeline to provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition
Works with stakeholders, including data, design, product, and government stakeholders, and assists them with data-related technical issues
Writes unit and integration tests for all data processing code
Works with DevOps engineers on continuous integration (CI), continuous delivery (CD), and infrastructure as code (IaC)
Reads specifications and translates them into code and design documents
Performs code reviews and develops processes for improving code quality
Qualifications
Required
All candidates must pass public trust clearance through the U.S. Federal Government. This requires candidates either to be U.S. citizens or to pass clearance through the Foreign National Government System, which requires that candidates have lived within the United States for at least 3 of the previous 5 years and have a valid, non-expired passport from their country of birth along with appropriate visa/work permit documentation
Bachelor's degree in Computer Science, Engineering, or a related technical field; OR
In lieu of a degree, 10 additional years of relevant professional experience and 8 years of specialized experience may be substituted
10+ years of total professional experience in the technology or data engineering field
Extensive data pipeline experience using Databricks (Python/R/Spark)
Data wrangler who enjoys optimizing data systems and building them from the ground up
Real-world data and statistics experience and excellent analytical skills working with unstructured datasets
Experience implementing statistical data visualization and BI tools (Power BI, Amazon QuickSight)
Self-sufficient and comfortable supporting the data needs of multiple teams, systems, and products
Experienced in designing data architecture for shared services, scalability, and performance
Experienced in designing data services including API, metadata, and data catalog
Experienced in data governance processes to ingest (batch, stream), curate, and share data with upstream and downstream data users
Ability to build and optimize data sets, 'big data' data pipelines, and architecture
Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions
Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata
Flexible and willing to accept a change in priorities as necessary
Ability to work in a fast-paced, team-oriented environment
Experience with Agile methodology, using test-driven development
Experience with Atlassian Jira/Confluence
Excellent command of written and spoken English
Preferred
Experience with healthcare quality data, including Medicaid and CHIP provider data, beneficiary data, claims data, and quality measure data
PySpark/Spark experience
Experience supporting the Centers for Medicare & Medicaid Services (CMS)
Experience with AWS, including MWAA (Managed Workflows for Apache Airflow), EMR, and standard services such as S3, EC2, Secrets Manager, SNS, and CloudWatch; version control with GitHub; Python; and Java
Benefits
Highly competitive salary
Full healthcare benefits
Company
eSimplicity
eSimplicity delivers game-changing digital services, healthcare IT, and telecommunications solutions.
Funding
Current Stage: Growth Stage