Applicantz · 1 week ago
Senior Data Engineer
Responsibilities
Create and maintain optimal data pipeline architecture.
Assemble complex data sets that meet functional / non-functional requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, dbt, and AWS 'big data' technologies (see the dbt model sketch after this list).
Build analytics tools that utilize the data pipeline to provide actionable insights into employee experience, operational efficiency, and other key business performance metrics.
Work with stakeholders to assist with data-related technical issues and support associated data infrastructure needs.
Build processes supporting data transformation, data structures, metadata, dependency and workload management.
Keep up to date with the latest feature sets and capabilities from public cloud providers (such as AWS and Azure) and find ways to apply them to help the team.
Work with data scientists and analysts to strive for greater functionality in our data systems.
Lead a team of data engineers and provide technical guidance and mentorship.
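Most of the transformation work described above would typically live in a dbt project. Below is a minimal sketch of what one such model might look like; the model, source, and column names (stg_events, raw.events, event_id) are hypothetical placeholders rather than details from this posting, and the QUALIFY clause assumes a Snowflake warehouse, as listed in the requirements.

    -- models/staging/stg_events.sql (hypothetical model name)
    -- Deduplicates raw event rows and normalizes the timestamp,
    -- materialized as a view so it stays cheap to rebuild.
    {{ config(materialized='view') }}

    select
        event_id,
        user_id,
        cast(event_ts as timestamp_ntz) as event_ts
    from {{ source('raw', 'events') }}
    -- Snowflake's QUALIFY keeps only the latest row per event_id
    qualify row_number() over (partition by event_id order by event_ts desc) = 1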
Qualifications
Required
8+ years of experience in a Data Engineering role
Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
5+ years of hands-on experience in Snowflake
5+ years of experience working in dbt, with knowledge of advanced dbt concepts such as macros and Jinja templating (see the macro sketch after this list)
4+ years of work experience in B2B Marketing Domain
Advanced SQL experience: query authoring, relational databases, and working familiarity with a variety of database technologies
Experience with scripting languages such as Python
Experience with big data tools such as PySpark
Experience with AWS cloud services used often for data engineering including S3, EC2, Glue, Lambda, RDS, or Redshift
Experience working with APIs to pull and push data
Experience optimizing 'big data' data pipelines, architectures and data sets
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Able to read existing code to understand functionality and debug issues
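To make the "advanced dbt concepts" requirement above concrete: a dbt macro is a reusable, Jinja-templated SQL snippet that can be called from any model. The sketch below follows the pattern used in dbt's own documentation; the macro name and columns are illustrative, not drawn from this posting.

    -- macros/cents_to_dollars.sql (illustrative macro)
    -- Wraps a unit conversion so every model applies the same rounding rule.
    {% macro cents_to_dollars(column_name, precision=2) %}
        round({{ column_name }} / 100.0, {{ precision }})
    {% endmacro %}

    -- Usage inside a model:
    --   select {{ cents_to_dollars('amount_cents') }} as amount_usd
    --   from {{ ref('stg_payments') }}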
Preferred
Familiarity with Agile and Scrum methodologies
Experience developing dashboards with Power BI
Analytical skills related to working with unstructured datasets
A successful track record of extracting value from large, disconnected datasets
Experience working with agile, globally distributed teams
Company
Applicantz
We help teams not stress over resignations.
Funding
Current Stage
Growth Stage