VARITE INC
Senior SQL and ETL Engineer
VARITE INC is an award-winning minority business enterprise providing global consulting & staffing services. They are seeking a qualified Senior SQL and ETL Engineer to participate in the design, development, testing, and implementation of applications software, with a focus on database management and ETL pipeline development.
Information Technology & Services
Responsibilities
Possess knowledge and experience in applications software development principles and methods sufficient to participate in the design, development, testing, and implementation of new or modified applications software, including:
Operating systems installation and configuration procedures
Organization's operational environment
Software design principles, methods and approaches
Principles, methods and procedures for designing, developing, optimizing and integrating new and/or reusable systems components
Pertinent government regulations
Infrastructure requirements, such as bandwidth and server sizing
Database management principles and methodologies, including data structures, data modeling, data warehousing and transaction processing
Functionality and operability of the current operating environment
Systems engineering concepts and factors such as structured design, supportability, survivability, reliability, scalability and maintainability
Optimization concepts and methods
Establish and maintain cooperative working relationships with those contacted in the course of the work
Communicate effectively, both orally and in writing, and prepare effective reports
Qualifications
Required
Minimum of seven (7) years of experience in electronic data processing systems study, design, and programming
At least four (4) years of that experience must have been in a lead capacity
3 years of experience in the past 4 years developing and optimizing SQL, PL/SQL, and T-SQL logic, including stored procedures, functions, performance tuning, and advanced relational modeling across Oracle and SQL Server
3 years of experience in the past 4 years working with mainframe systems, including data extraction, mapping, and conversion into modern ETL/ELT pipelines
3 years of experience in the past 4 years designing, orchestrating, and deploying ETL/ELT pipelines using Azure Synapse Analytics, Azure Data Factory, SSIS, and Azure DevOps CI/CD workflows
3 years of experience in the past 4 years building and maintaining enterprise data warehouses using Oracle, SQL Server, Teradata, or cloud data platforms
3 years of experience in the past 4 years working with big data technologies such as Apache Spark, PySpark, or Hadoop for large-scale data transformation
3 years of experience in the past 4 years integrating structured and semi-structured data (CSV, XML, JSON, Parquet) and consuming APIs using Python/PySpark
3 years of experience in the past 4 years developing analytics-ready datasets and supporting business intelligence platforms such as Power BI or Cognos, including writing optimized SQL for reporting
3 years of experience in the past 4 years performing data cleansing, validation, profiling, and data quality assurance for regulated or audit-sensitive environments
3 years of experience supporting production ETL operations, troubleshooting pipeline failures, conducting root cause analysis, and ensuring SLAs for daily, monthly, or regulatory reporting workloads
Possession of a bachelor's degree in an IT-related or Engineering field
Strong expertise in SQL, PL/SQL, and T-SQL with advanced query tuning, stored procedure optimization, and relational data modeling across Oracle, SQL Server, PostgreSQL, and MySQL
Proficiency in modern ETL/ELT tools including Azure Synapse Analytics, Azure Data Factory, and SSIS, with the ability to design scalable ingestion, transformation, and loading workflows
Ability to design and implement data warehouse data models (star schema, snowflake, dimensional hierarchies) and optimize models for analytics and large-scale reporting
Strong understanding of data integration, data validation, cleansing, profiling, and end-to-end data quality processes to ensure accuracy and consistency across systems
Knowledge of enterprise data warehouse architecture, including staging layers, data marts, data lakes, and cloud-based ingestion frameworks
Experience applying best practices for scalable, maintainable ETL engineering, including metadata-driven design and automation
Proficiency in Python and PySpark (and familiarity with Shell/Perl) for automating ETL pipelines, handling semi-structured data, and transforming large datasets
Experience handling structured and semi-structured data formats (CSV, JSON, XML, Parquet) and consuming REST APIs for ingestion
Knowledge of data security and compliance practices, including credential management, encryption, and governance in Azure
Expertise in optimizing ETL and data warehouse performance through indexing, partitioning, caching strategies, and pipeline optimization
Familiarity with CI/CD workflows using Git/GitHub Actions for ETL deployment across Dev, QA, and Production environments
Ability to collaborate with analysts and business stakeholders, translating complex requirements into actionable datasets, KPIs, and reporting structures