PRIMENOTCH
Data Engineer
Responsibilities
Design and implement robust, scalable data pipelines to collect, process, and store large volumes of data from various sources.
Build and maintain ETL (Extract, Transform, Load) processes to integrate data from multiple internal and external sources.
Utilize cloud services (AWS, Google Cloud, Azure) for data storage, computation, and orchestration.
Design efficient data models for analytical purposes and work with data warehouses (e.g., Redshift, BigQuery, Snowflake).
Optimize data pipelines and query performance, ensuring that systems are cost-effective, scalable, and reliable.
Work closely with data scientists, analysts, and engineers to identify data needs and ensure data accessibility and quality.
Ensure high standards for data quality, consistency, and security across all data systems and processes.
Automate repetitive tasks and set up monitoring for data pipelines to detect and resolve issues proactively.
Qualifications
Required
5+ years of experience in data engineering or related roles, with a strong understanding of data systems and architectures.
Proficiency in SQL and experience with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
Strong experience with ETL frameworks (e.g., Apache Airflow, Talend, dbt).
Hands-on experience with cloud platforms (AWS, Google Cloud, Azure), particularly with services such as S3, Lambda, Redshift, BigQuery, and Databricks.
Expertise in programming languages such as Python, Java, or Scala for data engineering tasks.
Experience with data warehousing and distributed data processing frameworks (e.g., Apache Spark, Hadoop).
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
Knowledge of data security best practices for protecting sensitive data.
Strong problem-solving skills and attention to detail.
Ability to work independently in a remote environment while collaborating with cross-functional teams.
Preferred
Experience with streaming data platforms (e.g., Apache Kafka, Kinesis).
Familiarity with machine learning frameworks and the ability to support data scientists with data engineering needs.
Experience with DevOps practices and tools such as Jenkins, Terraform, or Ansible.
A bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
Company
PRIMENOTCH
Primenotch Solutions is a dynamic IT consulting firm dedicated to providing innovative solutions and cutting-edge technologies to businesses worldwide.
Funding
Current Stage
Early Stage
Company data provided by Crunchbase