Senior SQL and ETL Engineer jobs in United States
This job has closed.

Alpha Consulting Corp. · 3 days ago

Senior SQL and ETL Engineer

Alpha Consulting Corp. is seeking a Senior SQL and ETL Engineer responsible for leading and developing complex IT applications. The role involves designing, implementing, and managing data integration solutions, optimizing ETL processes, and ensuring data quality and compliance.

Consulting · Pharmaceutical · Staffing Agency
Growth Opportunities
H1B Sponsor Likely

Responsibilities

A Senior Programmer is responsible for leading and/or working on the design, documentation, development, modification, testing, installation, implementation, and support of the most complex new or existing applications software
This classification may also plan, install, configure, test, implement and manage a systems environment in support of an organization’s IT architecture and business needs
The Senior Programmer, in development of applications software, is responsible for analyzing and refining systems requirements; translating systems requirements into applications prototypes; planning and designing systems architecture; writing, debugging and maintaining code; determining and designing applications architecture; determining output media/formats; designing user interfaces; working with customers to test applications; assuring software and systems quality and functionality; integrating hardware and software components; writing and maintaining program documentation; evaluating new applications software technologies; and/or ensuring the rigorous application of information security/information assurance policies, principles and practices to the delivery of application software services
The Senior Programmer, in development of operating systems, is responsible for analyzing systems requirements in response to business requirements, risks and costs; evaluating, selecting, verifying and validating the systems software environment; evaluating, selecting and installing compilers, assemblers and utilities; integrating hardware and software components within the systems environment; monitoring and fine-tuning performance of the systems environment; evaluating new systems engineering technologies and their effect on the operating environment; and/or ensuring that information security/information assurance policies, principles and practices are an integral element of the operating environment
The Senior Programmer will possess knowledge and experience in applications software development principles and methods sufficient to participate in the design, development, testing and implementation of new or modified applications software; operating systems installation and configuration procedures; the organization’s operational environment; software design principles, methods and approaches; principles, methods and procedures for designing, developing, optimizing and integrating new and/or reusable systems components; pertinent government regulations; infrastructure requirements, such as bandwidth and server sizing; database management principles and methodologies, including data structures, data modeling, data warehousing and transaction processing; the functionality and operability of the current operating environment; systems engineering concepts and factors such as structured design, supportability, survivability, reliability, scalability and maintainability; and optimization concepts and methods. The Senior Programmer will also be able to establish and maintain cooperative working relationships with those contacted in the course of the work, and to speak and write effectively and prepare effective reports
Strong expertise in SQL, PL/SQL, and T-SQL with advanced query tuning, stored procedure optimization, and relational data modeling across Oracle, SQL Server, PostgreSQL, and MySQL
Proficiency in modern ETL/ELT tools including Azure Synapse Analytics, Azure Data Factory, and SSIS, with the ability to design scalable ingestion, transformation, and loading workflows
Ability to design and implement data warehouse data models (star schema, snowflake, dimensional hierarchies) and optimize models for analytics and large-scale reporting
Strong understanding of data integration, data validation, cleansing, profiling, and end-to-end data quality processes to ensure accuracy and consistency across systems
Knowledge of enterprise data warehouse architecture, including staging layers, data marts, data lakes, and cloud-based ingestion frameworks
Experience applying best practices for scalable, maintainable ETL engineering, including metadata-driven design and automation
Proficiency in Python and PySpark (and familiarity with Shell/Perl) for automating ETL pipelines, handling semi-structured data, and transforming large datasets
Experience handling structured and semi-structured data formats (CSV, JSON, XML, Parquet) and consuming REST APIs for ingestion
Knowledge of data security and compliance practices, including credential management, encryption, and governance in Azure
Expertise in optimizing ETL and data warehouse performance through indexing, partitioning, caching strategies, and pipeline optimization
Familiarity with CI/CD workflows using Git/GitHub Actions for ETL deployment across Dev, QA, and Production environments
Ability to collaborate with analysts and business stakeholders, translating complex requirements into actionable datasets, KPIs, and reporting structures
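The dimensional-modeling and query-tuning skills listed above can be made concrete with a small sketch. The following is a minimal, self-contained star-schema example in SQLite; all table, column, and value names are hypothetical illustrations, not taken from this posting:

```python
import sqlite3

# In-memory database standing in for a warehouse; all names are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A star schema: dimension tables surrounding a central fact table.
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, iso_date TEXT, year INTEGER);
CREATE TABLE fact_sales  (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    amount     REAL
);
-- Indexing the fact table's foreign keys speeds up star joins.
CREATE INDEX idx_fact_product ON fact_sales(product_id);
CREATE INDEX idx_fact_date    ON fact_sales(date_id);
""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(10, "2024-01-15", 2024), (11, "2025-02-20", 2025)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(100, 1, 10, 250.0), (101, 2, 10, 100.0), (102, 1, 11, 75.0)])

# A typical analytics query: revenue per category per year via star joins.
rows = cur.execute("""
    SELECT p.category, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d    ON d.date_id    = f.date_id
    GROUP BY p.category, d.year
    ORDER BY d.year
""").fetchall()
print(rows)  # [('Hardware', 2024, 350.0), ('Hardware', 2025, 75.0)]
```

The indexes on the fact table's foreign-key columns illustrate the kind of tuning choice the role calls for: star-join queries filter and group through those columns, so indexing them is usually the first optimization step.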

Qualification

SQL · ETL/ELT tools · Data warehouse design · Data integration · Python · PL/SQL · T-SQL · Azure Synapse Analytics · Azure Data Factory · SSIS · Data quality processes · Data security practices · Collaboration · Communication

Required

This classification must have a minimum of seven (7) years of experience in electronic data processing systems study, design, and programming. At least four (4) years of that experience must have been in a lead capacity
3 years of experience in the past 4 years developing and optimizing SQL, PL/SQL, and T-SQL logic, including stored procedures, functions, performance tuning, and advanced relational modeling across Oracle and SQL Server
3 years of experience in the past 4 years working with mainframe systems, including data extraction, mapping, and conversion into modern ETL/ELT pipelines
3 years of experience in the past 4 years designing, orchestrating, and deploying ETL/ELT pipelines using Azure Synapse Analytics, Azure Data Factory, SSIS, and Azure DevOps CI/CD workflows
3 years of experience in the past 4 years building and maintaining enterprise data warehouses using Oracle, SQL Server, Teradata, or cloud data platforms
3 years of experience in the past 4 years working with big data technologies such as Apache Spark, PySpark, or Hadoop for large-scale data transformation
3 years of experience in the past 4 years integrating structured and semi-structured data (CSV, XML, JSON, Parquet) and consuming APIs using Python/PySpark
3 years of experience in the past 4 years developing analytics-ready datasets and supporting business intelligence platforms such as Power BI or Cognos, including writing optimized SQL for reporting
3 years of experience in the past 4 years performing data cleansing, validation, profiling, and data quality assurance for regulated or audit-sensitive environments
3 years of experience supporting production ETL operations, troubleshooting pipeline failures, conducting root cause analysis, and ensuring SLAs for daily, monthly, or regulatory reporting workloads
This classification requires the possession of a bachelor's degree in an IT-related or Engineering field. Additional qualifying experience may be substituted for the required education on a year-for-year basis
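Several of the requirements above concern validating, profiling, and cleansing semi-structured data before it enters a warehouse. As a rough sketch of that workflow (the feed, field names, and validation rules below are hypothetical, not from this posting), a minimal Python quality-check pass over JSON records might look like:

```python
import json

# Hypothetical raw feed: semi-structured JSON records with quality problems.
raw = """[
    {"id": 1, "email": "a@example.com", "amount": "120.50"},
    {"id": 2, "email": "",              "amount": "75.00"},
    {"id": 3, "email": "c@example.com", "amount": "not-a-number"}
]"""

def validate(record):
    """Return a list of rule violations for one record (rules are illustrative)."""
    errors = []
    if not record.get("email"):
        errors.append("missing email")
    try:
        float(record["amount"])
    except (ValueError, KeyError):
        errors.append("non-numeric amount")
    return errors

records = json.loads(raw)
clean = [r for r in records if not validate(r)]
failed = {r["id"]: validate(r) for r in records if validate(r)}

print(len(clean), failed)  # 1 {2: ['missing email'], 3: ['non-numeric amount']}
```

In a production pipeline, the failed records would typically be routed to a quarantine table for root-cause analysis rather than silently dropped, which matches the posting's emphasis on audit-sensitive data quality assurance.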

Company

Alpha Consulting Corp.

Alpha Consulting Corp. has been exceeding expectations in the IT, pharmaceutical, and clinical staffing business since 1994.

H1B Sponsorship

Alpha Consulting Corp. has a track record of offering H1B sponsorship. Note that this does not guarantee sponsorship for this specific role; the figures below are provided for reference. (Data powered by the US Department of Labor.)
Total sponsorships by year: 2025 (2), 2024 (4), 2023 (3), 2021 (4), 2020 (6)

Funding

Current stage: Growth Stage (company data provided by Crunchbase)