Principal Data Engineer - Remote US @ Seamless.AI | Jobright.ai
40 applicants
Seamless.AI · 9 hours ago

Principal Data Engineer - Remote US

Artificial Intelligence (AI) · Information Technology
Actively Hiring
No H1B


Responsibilities

Design, develop, and maintain robust and scalable ETL pipelines to acquire, transform, and load data from various sources into our data ecosystem.
Collaborate with cross-functional teams to understand data requirements and develop efficient data acquisition and integration strategies.
Implement data transformation logic using Python and other relevant programming languages and frameworks.
Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs.
Optimize and tune ETL processes for improved performance and scalability, particularly with large data sets.
Apply methodologies and techniques for data matching, deduplication, and aggregation to ensure data accuracy and quality.
Implement and maintain data governance practices to ensure compliance, data security, and privacy.
Collaborate with the data engineering team to explore and adopt new technologies and tools that enhance the efficiency and effectiveness of data processing.
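
The matching, deduplication, and aggregation step described above can be sketched in pandas (named later in the qualifications). The contact data and match key here are hypothetical, a minimal illustration rather than Seamless.AI's actual pipeline:

```python
import pandas as pd

# Hypothetical raw contact records; "email" serves as the match key.
records = pd.DataFrame({
    "email": ["a@x.com", "A@X.com", "b@y.com", "b@y.com"],
    "company": ["Acme", "Acme", "Beta", "Beta"],
    "score": [10, 12, 7, 9],
})

# Normalize the match key before deduplicating: case differences
# would otherwise hide duplicate contacts.
records["email"] = records["email"].str.lower()

# Keep the highest-scoring record per contact, then aggregate per company.
deduped = (records.sort_values("score", ascending=False)
                  .drop_duplicates(subset="email", keep="first"))
by_company = deduped.groupby("company", as_index=False)["score"].sum()
```

In a production ETL pipeline the same pattern would typically run on PySpark DataFrames (`dropDuplicates`, `groupBy().agg()`) so it scales to the large data sets the role calls for.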

Qualifications


Python, AWS Glue, ETL technologies, Spark, SQL, Data modeling, Data warehousing, Data architecture, Large data sets, Machine learning models, Data matching, Deduplication, Data aggregation, Data governance, Data security, Privacy practices, Collaboration skills

Required

Expertise in Python, Spark, AWS Glue, and other ETL (Extract, Transform, Load) technologies.
Proven track record in data acquisition and transformation.
Experience working with large data sets and applying methodologies for data matching and aggregation.
Strong organizational skills and the ability to work independently as a self-starter.
Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark).
Hands-on experience with AWS Glue or similar ETL tools and technologies.
Solid understanding of data modeling, data warehousing, and data architecture principles.
Expertise in working with large data sets, data lakes, and distributed computing frameworks.
Experience developing and training machine learning models.
Strong proficiency in SQL.
Familiarity with data matching, deduplication, and aggregation methodologies.
Experience with data governance, data security, and privacy practices.
Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues.
Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously.
Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent years of work experience.
7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration.
Professional experience with Spark and AWS pipeline development required.
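
The SQL proficiency and deduplication items above often come together in practice as the window-function dedup pattern. A minimal sketch, run against an in-memory SQLite database with made-up contact rows purely for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contacts (email TEXT, score INTEGER)")
con.executemany("INSERT INTO contacts VALUES (?, ?)",
                [("a@x.com", 10), ("a@x.com", 12), ("b@y.com", 7)])

# Keep one row per email: ROW_NUMBER() ranks duplicates within each
# partition, and the outer query keeps only the top-ranked row.
rows = con.execute("""
    SELECT email, score FROM (
        SELECT email, score,
               ROW_NUMBER() OVER (PARTITION BY email
                                  ORDER BY score DESC) AS rn
        FROM contacts
    ) WHERE rn = 1
    ORDER BY email
""").fetchall()
```

The same `ROW_NUMBER() OVER (PARTITION BY … ORDER BY …)` pattern works unchanged in Spark SQL and most data-warehouse dialects.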

Company

Seamless.AI

company-logo
Seamless.AI provides sales automation software intended to organize contacts and make them universally accessible and useful.

Funding

Current Stage
Growth Stage
Total Funding
$75.3M
2021-05-01 · Series A · $75M
2018-07-23 · Seed · amount undisclosed
2018-07-15 · Seed · amount undisclosed

Leadership Team

Brandon Bornancin
CEO and Founder
Adam Buerger
VP of Sales
Company data provided by Crunchbase