GeorgiaTEK Systems Inc. · 6 hours ago
Expert Microsoft Azure/Fabric Data Engineer
Responsibilities
Design, build, and maintain efficient, reusable, and reliable architecture and code.
Ensure the best possible performance and quality of high-scale web applications and services.
Participate in architecture and system design discussions.
Independently perform hands-on development and unit testing of the applications.
Collaborate with the development team and build individual components into complex enterprise web systems.
Work in a team environment with product, frontend design, production operations, QE/QA, and cross-functional teams to deliver projects throughout the full software development life cycle.
Identify and resolve performance issues.
Keep up to date with new technology developments and implementations.
Participate in code reviews to ensure standards and best practices are met.
Qualifications
Required
Bachelor’s degree in Computer Science, Software Engineering, MIS, or an equivalent combination of education and experience.
Experience implementing and supporting data lakes, data warehouses, and data applications on AWS or Microsoft Azure for large enterprises (Fabric is a plus)
Programming experience with Python, Spark, and SQL.
Experience with Microsoft Azure services, specifically Data Factory, ADLS Gen2, Synapse Analytics, Synapse Database, and KQL.
Experience in system analysis, design, development, and implementation of data ingestion pipelines in AWS or Azure.
Knowledge of ETL/ELT processes.
Experience with end-to-end data solutions (ingest, storage, integration, processing, access) on Azure.
Experience architecting and implementing CI/CD strategies for an enterprise data platform (EDP).
Experience implementing high-velocity streaming solutions.
Experience developing data APIs.
Experience implementing POCs for new technologies or tools on the EDP and onboarding them for real use cases.
5+ years of experience as a Data Engineer working with data lakes.
Experience developing business applications using NoSQL/SQL databases.
Experience working with object stores and structured, semi-structured, and unstructured data is a must.
Experience developing CI/CD pipelines using Terraform and GitHub.
Preferred
Solid experience implementing solutions on Azure-based data lakes
Experience migrating data from traditional on-prem relational database systems to Azure technologies (preferably to OneLake using Fabric technologies)
Any cloud certification (Azure / AWS preferred)
Experience with Microsoft Fabric is a plus