eSimplicity
Data Engineer III
eSimplicity is a modern digital services company that partners with government agencies to improve the lives and protect the well-being of all Americans. The Data Engineer III will be responsible for developing and optimizing data architecture and pipelines, supporting various teams in data initiatives, and ensuring high-quality data delivery throughout projects.
Health Care · Information Technology · Software · Telecommunications
Responsibilities
Develop, expand, and optimize our data and data pipeline architecture, and optimize data flow and collection for cross-functional teams
Support software developers, database architects, data analysts, and data scientists on data initiatives, and ensure optimal data delivery architecture is consistent across ongoing projects
Create new pipelines and maintain existing ones; update Extract, Transform, Load (ETL) processes; create new ETL features; and build proofs of concept (PoCs) with Redshift Spectrum, Databricks, AWS EMR, SageMaker, and similar tools
Implement large-dataset engineering with the support of project data specialists: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, data algorithms, and data maturity models (measurement and development); develop data strategy recommendations
Operate large-scale data processing pipelines and resolve business and technical issues pertaining to the processing and data quality
Assemble large, complex sets of data that meet non-functional and functional business requirements
Identify, design, and implement internal process improvements, including re-designing data infrastructure for greater scalability, optimizing data delivery, and automating manual processes
Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies
Build analytical tools that use the data pipeline to provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition
Work with data, design, product, and government stakeholders, and assist them with data-related technical issues
Write unit and integration tests for all data processing code
Work with DevOps engineers on continuous integration (CI), continuous delivery (CD), and infrastructure as code (IaC)
Read specs and translate them into code and design documents
Perform code reviews and develop processes for improving code quality
Perform other duties as assigned
Qualifications
Required
All candidates must pass a public trust clearance through the U.S. Federal Government. This requires candidates either to be U.S. citizens or to pass clearance through the Foreign National Government System, which requires that candidates have lived in the United States for at least 3 of the previous 5 years and hold a valid, non-expired passport from their country of birth along with appropriate visa/work permit documentation
Minimum of 8 years of previous Data Engineer or hands-on software development experience, with at least 4 of those years using Python, Java, and cloud technologies for data pipelining
A Bachelor's degree in Computer Science, Information Systems, Engineering, Business, or another related scientific or technical discipline. With ten years of general information technology experience and at least eight years of specialized experience, a degree is not required
Expert data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up
Self-sufficient and comfortable supporting the data needs of multiple teams, systems, and products
Experienced in designing data architecture for shared services, scalability, and performance
Experienced in designing data services, including APIs, metadata, and data catalogs
Experienced in data governance processes to ingest (batch, stream), curate, and share data with upstream and downstream data users
Ability to build and optimize data sets, 'big data' pipelines, and architectures
Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions
Excellent analytic skills associated with working on unstructured datasets
Ability to build processes that support data transformation, workload management, data structures, dependency management, and metadata
Demonstrated understanding and experience using software and tools including big data tools like Spark and Hadoop; relational databases including MySQL and Postgres; workflow management and pipeline tools such as Apache Airflow and AWS Step Functions; AWS cloud services including Redshift, RDS, EMR, and EC2; stream-processing systems like Spark Streaming and Storm; and functional/object-oriented scripting languages including Scala, Java, and Python
Flexible and willing to accept a change in priorities as necessary
Ability to work in a fast-paced, team-oriented environment
Experience with Agile methodology, using test-driven development
Experience with GitHub and Atlassian Jira/Confluence
Excellent command of written and spoken English
Preferred
Federal Government contracting work experience
Databricks certification, Google Professional Data Engineer certification, IBM Certified Data Engineer – Big Data certification, or Cloudera Certified Professional (CCP) Data Engineer
Centers for Medicare & Medicaid Services (CMS) or health care industry experience
Experience with healthcare quality data including Medicaid and CHIP provider data, beneficiary data, claims data, and quality measure data
Benefits
Highly competitive salaries
Full healthcare benefits
Company
eSimplicity
eSimplicity delivers game-changing digital services, healthcare IT and telecommunications solutions.