Benchmark Analytics · 2 days ago
Senior Data Engineer
Analytics · Information Services
Responsibilities
Designing, developing, and maintaining complex data pipelines, ETL processes, and data integration solutions
Leading development discussions and designing patterns across different languages, tools, and frameworks
Playing a lead role in team workshops, refinement sessions, and development paths for data engineering
Assessing and analyzing current/legacy data processes, identifying inefficiencies, and suggesting improvements
Collaborating with data scientists and analysts to understand data requirements, and ensure the availability of clean, accurate, and reliable data for ML and reporting
Collaborating with team members across the organization to improve product data quality, data pipeline efficiency and data platform performance/monitoring
Supporting documentation efforts as necessary to prepare the team and company for growth/scale
Playing a supporting role in educating the data engineering team members on best practices, relevant knowledge, and specific skills
Qualifications
Required
Bachelor’s degree in a STEM field or equivalent (e.g., Computer Science, Engineering)
Experience in data engineering, data analysis, and data integration: building data pipelines in ETL platforms (e.g., CloverDX or equivalent)
Applying data extraction, transformation, and loading techniques to connect large data sets from a variety of sources
Evaluating legacy systems for opportunities to migrate to modern data processing frameworks
Ability to translate operational SQL scripts and migrate them into an ETL layer
Experience troubleshooting problems that span multiple data systems
Experience writing queries, analyzing data, and data-centric problem solving
Strong technical aptitude and ability to quickly learn and develop skills in new technical products
Strong sense of process (ability to understand how steps relate to each other to achieve end results)
Strong analytical skills related to working with unstructured datasets
Capacity to translate business requirements into technical solutions
Knowledge of DevOps practices (e.g. CI/CD pipelines, infrastructure as code)
Ability to successfully manage multiple concurrent tasks/projects and meet deadlines
Works and communicates well with others; has empathy for colleagues and customers
Able to receive and respond positively to feedback
Able to work independently with reasonable guidance from management, seeking direction in complex and non-routine situations
Able to effectively communicate across multiple levels (team members, managers) in a fast-paced environment
5+ years of experience architecting, designing, and developing scalable data solutions
5+ years of experience building and maintaining optimized ETL/ELT pipelines (batch and/or streaming) that handle a variety of structured and unstructured sources (CloverDX or equivalent)
3+ years of experience with Java, Python, Unix shell scripting, and data-driven job schedulers
2+ years of experience with cloud-based platforms (AWS) and containerization technologies (Docker, Kubernetes)
Proficiency in various data modeling techniques (e.g., ER, hierarchical, relational, or NoSQL modeling) and model governance
Excellent design and development experience with SQL and NoSQL databases, including OLTP and OLAP systems
Technologies: SQL, Python, Java, AWS products, CloverDX, Postgres, Spark/Hive, GitLab, Git, Docker, Kubernetes, Django
Benefits
Unlimited paid time off (PTO)
Medical, dental, and vision plan offerings along with 401(k)
Employer-paid Short-Term Disability, Long-Term Disability, and Life Insurance
Other Voluntary Benefits include additional Life Insurance, Spouse Life Insurance, and Accident Insurance