Pure Talent Consulting · 1 day ago
AWS Data Engineer
Pure Talent Consulting is seeking an AWS Data Engineer responsible for designing, developing, and implementing a reporting platform and architecture. The role focuses on Business Intelligence and Data Analytics, delivering timely data solutions to stakeholders while collaborating with various teams.
Responsibilities
Can design, extend, and document a modern ELT lakehouse for all company data
S3 parquet medallion layers, Glue Catalog, clean architecture/docs, flows, modeling artifacts
AWS-first engineering — deep platform skills, not AWS console work
Glue (PySpark), DMS CDC, AppFlow, Step Functions, Lambda, IAM, CDK
Strong with NAWS CI/CD and IaC — can build, extend, and debug pipelines and CDK stacks in single- or multi-account deployments
CodeCommit, CodePipeline, CodeBuild, etc.
Owns source system changes and cross-system integration end-to-end
Schema alignment, SCD2, filling gaps between systems with no shared keys, building composite business keys, entity resolution, and shaping normalized/dimensional Gold models (dim/fact) and denormalized reporting layer (rpt)
A developer with a proper dev workflow
WSL/Linux, Docker Glue containers, pytest/Spark testing, mypy, ruff, Git flow, CI/CD pipelines
Understands data governance & security patterns and can help harden the platform
Lake Formation permissions, table/column-level controls, catalog organization, cross-account access patterns
Can design normalized → denormalized layers in Gold for BI, apps, and reporting
Entity modeling, dimensional design, surrogate keys, star schemas, wide reporting views
Forward-looking engineer who can grow the platform into streaming + AI
Kinesis/MSK ingestion, Delta/Iceberg, ML-driven enrichment, scalable gold models
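The cross-system integration work listed above (composite business keys, SCD2) can be sketched in a few lines. This is a stdlib-only Python illustration with hypothetical column names, not the team's actual implementation; in practice this logic would live in Glue/PySpark against the lakehouse tables:

```python
import hashlib
from datetime import date

def composite_key(*parts: str) -> str:
    """Deterministic business key hashed from several natural identifiers,
    useful when source systems share no single key (hypothetical scheme)."""
    raw = "|".join(p.strip().lower() for p in parts)
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def scd2_upsert(dim: list[dict], incoming: dict, effective: date,
                tracked: tuple[str, ...]) -> list[dict]:
    """Type-2 slowly changing dimension: close the current version of a row
    and append a new one when any tracked attribute changed."""
    for row in dim:
        if row["bk"] == incoming["bk"] and row["is_current"]:
            if all(row[c] == incoming[c] for c in tracked):
                return dim  # no change in tracked attributes, nothing to do
            row["valid_to"], row["is_current"] = effective, False
    dim.append({**incoming, "valid_from": effective,
                "valid_to": None, "is_current": True})
    return dim

# One current dimension row, then an incoming change from a source system
dim = [{"bk": composite_key("ACME", "US-East"), "tier": "silver",
        "valid_from": date(2024, 1, 1), "valid_to": None, "is_current": True}]
incoming = {"bk": composite_key("ACME", "US-East"), "tier": "gold"}
dim = scd2_upsert(dim, incoming, date(2024, 6, 1), tracked=("tier",))
```

The old version is closed out with `valid_to`/`is_current`, and the new version is appended, preserving full history for point-in-time queries.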
Data Warehouse/Data Lake Architecture and Development
Data Modeling & Architecture
Design and maintain the architecture of reporting platforms, including QuickSight
Synthesize many disparate data sources into one centralized data platform
Lead efforts to design, develop, and implement database enhancements that improve efficiency and streamline use for analytics and business analysis
Scope out and develop data enhancement plans for company initiatives
Implement and enforce data validation procedures
Make suggestions for improvements to the data team and team processes
Produce complete and accurate reports to business entities and users
Other duties as assigned
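The normalized-to-denormalized Gold layering called for above can be illustrated with a minimal star-schema join: facts keyed by surrogate keys, widened into a flat reporting (rpt) view. All table and column names here are hypothetical, and plain Python stands in for the actual SQL/PySpark:

```python
# Dimension tables keyed by surrogate key (hypothetical data)
dim_customer = {1: {"customer_name": "ACME", "segment": "Enterprise"}}
dim_date = {20240601: {"date": "2024-06-01", "quarter": "Q2"}}

# Fact table: one row per sale, referencing dimensions by surrogate key
fact_sales = [
    {"customer_sk": 1, "date_sk": 20240601, "amount": 120.0},
    {"customer_sk": 1, "date_sk": 20240601, "amount": 80.0},
]

# rpt layer: one wide, denormalized row per fact, dimensions resolved
rpt_sales = [
    {**dim_customer[f["customer_sk"]], **dim_date[f["date_sk"]],
     "amount": f["amount"]}
    for f in fact_sales
]
```

The normalized dim/fact layer stays the source of truth for modeling, while the wide rpt view gives BI tools a join-free surface to query.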
Qualifications
Required
Bachelor's degree in Data Science, Software Engineering, Information Technology, or a related field
Minimum 5 years of hands-on experience with AWS data tools
2+ years of experience as a technical lead on data and analytics projects
5+ years of experience architecting and maintaining enterprise data platforms
Expert-level proficiency with AWS data tools (QuickSight)
Expert-level user of Microsoft SQL Server tools and SQL
Expert-level working knowledge of Python
Experience validating databases for accuracy
Experience working with APIs and event-driven architectures
Experience working with ETL/ELT Pipelines
Knowledge of how data entities and elements should be structured to ensure accuracy, performance, and understandability, as well as operational, analytical, reporting, and data science efficiency
Applied knowledge of cloud computing, RPA programming, and machine learning
Applied knowledge of data modeling principles (e.g., dimensional modeling and star schemas)
Must be a self-directed professional able to work effectively in a fast-paced, demanding environment, multitask, problem-solve, and follow through
Work collaboratively with internal partners/departments
Preferred
Hands-on experience with modern data lake/lakehouse patterns, S3-based pipelines, Glue, Step Functions, Athena, or event-driven orchestration
Real-world implementation experience sufficient to articulate tradeoffs between architectural approaches such as snapshotting vs. diffing vs. time travel
Comfortable and willing to work at the infrastructure or developer-tooling level: writing Python-based ELT logic, using SDKs such as Boto3 or the CDK, or contributing to CI/CD automation and platform tooling
Able to contribute to foundational platform design or reusable framework development
Comfortable working in an ambiguity-heavy environment that requires creating new patterns and processes
Ability to collaborate with other data engineers on SQL transformation logic and business rules in parallel with establishing core pipeline and platform
Someone who can help drive the modernization of our platform through IaC, scalability, and NAWS automation
Demonstrated self-directed initiative in past roles (e.g., manually deploying CloudFormation templates or standing up automation)
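One of the tradeoffs named in the Preferred list, diffing keyed snapshots to derive change records rather than storing full copies or relying on table-format time travel, can be sketched as follows (hypothetical keys and attributes):

```python
# Two keyed snapshots of the same source table (hypothetical data)
old = {1: {"tier": "silver"}, 2: {"tier": "gold"}}
new = {1: {"tier": "gold"}, 3: {"tier": "bronze"}}

# Diffing: derive insert/delete/update change records via key-set algebra.
# Snapshotting would instead store `new` wholesale; time travel would defer
# this work to a table format such as Delta or Iceberg.
changes = (
    [("insert", k, new[k]) for k in new.keys() - old.keys()]
    + [("delete", k, old[k]) for k in old.keys() - new.keys()]
    + [("update", k, new[k]) for k in new.keys() & old.keys()
       if new[k] != old[k]]
)
```

Diffing keeps storage small but costs a full comparison per run; snapshots are cheap to write but expensive to store; time travel shifts both concerns onto the table format.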