Acunor · 2 days ago
Senior Data Engineer
Responsibilities
Design, implement, and manage scalable data pipelines and ETL processes using Python and PySpark (a minimal sketch follows this list).
Develop and maintain data models and architectures to support analytics and reporting requirements.
Optimize data processing workflows to ensure efficiency and scalability.
Manage and optimize Snowflake data warehouse solutions, ensuring high performance and cost-effectiveness.
Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
Ensure data integrity, security, and compliance with industry standards.
Utilize Azure Kubernetes Service (AKS) for orchestrating containerized applications and workflows.
Implement and manage cloud-based data solutions and integrations within the Azure ecosystem.
Collaborate with DevOps teams to deploy and monitor data infrastructure in a cloud environment.
Mentor and provide technical guidance to junior data engineers and other team members.
Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data needs and deliver solutions.
Lead projects and initiatives, ensuring timely and successful delivery of data solutions.
Monitor and analyze the performance of data systems and workflows, identifying and resolving issues proactively.
Implement best practices for data processing and storage to enhance system performance and reliability.
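For context, the following is a minimal sketch of the kind of PySpark ETL step described in the responsibilities above; the file paths, column names, and transformation logic are hypothetical placeholders, not part of the posting.

# Minimal PySpark ETL sketch: read raw CSV data, apply a simple
# transformation, and write partitioned Parquet output.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: load raw order data from a landing zone (path is hypothetical).
orders = spark.read.option("header", True).csv("/data/landing/orders.csv")

# Transform: cast types, filter out bad rows, and derive a partition column.
clean = (
    orders
    .withColumn("order_total", F.col("order_total").cast("double"))
    .filter(F.col("order_total") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to a curated zone for downstream analytics.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")

spark.stop()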
Qualification
Required
Advanced proficiency in Python, with experience in developing data processing scripts and applications.
Expertise in SQL for querying and managing relational databases, with the ability to write complex queries and optimize performance.
Extensive experience with PySpark for distributed data processing and ETL tasks.
Proficiency in Snowflake, including data modeling, performance tuning, and query optimization (see the sketch after this list).
Hands-on experience with Azure Kubernetes Service (AKS) for container orchestration and management.
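As an illustration of the Snowflake and SQL skills listed above, here is a minimal sketch that uses the snowflake-connector-python package to run an analytical query from Python; the connection parameters, table, and column names are hypothetical placeholders, not details from the posting.

# Minimal sketch: run an analytical SQL query against Snowflake from Python.
# Connection parameters, table, and column names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",      # hypothetical account identifier
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

# Example of a moderately complex query: monthly spend per customer
# over the last year, using window-free aggregation for clarity.
query = """
    SELECT
        customer_id,
        DATE_TRUNC('month', order_date) AS order_month,
        SUM(order_total)                AS monthly_total
    FROM orders
    WHERE order_date >= DATEADD('year', -1, CURRENT_DATE)
    GROUP BY customer_id, order_month
    ORDER BY monthly_total DESC
    LIMIT 100
"""

try:
    cur = conn.cursor()
    cur.execute(query)
    for customer_id, order_month, monthly_total in cur.fetchall():
        print(customer_id, order_month, monthly_total)
finally:
    conn.close()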