Energy Job Search · 1 day ago
Data Engineer - Enterprise AI
Chevron is seeking a Data Engineer to join their Enterprise AI team, focusing on building intelligent data solutions for advanced analytics and AI-driven decision-making. The role involves designing and implementing scalable data and AI solutions, deploying automated data transformation pipelines, and collaborating with cross-disciplinary teams to ensure data quality and interoperability across multiple AI applications.
Staffing & Recruiting
Responsibilities
Architect and Optimize Data Pipelines: Design, develop, and maintain robust ETL/ELT pipelines leveraging Databricks (including Databricks Genie for AI-assisted development), Azure Data Factory, Azure Synapse, and Azure Fabric. Architect solutions with a holistic AI foundation, ensuring pipelines and frameworks are built to support agentic AI systems, machine learning, and generative AI at scale
Enable AI-Ready Data: Build modular, reusable data assets and products optimized for AI workloads, ensuring data quality, lineage, governance, and interoperability across multiple AI applications
Collaborate Across Disciplines: Partner with AI delivery teams, including software engineers, AI engineers, and applied scientists, to deliver AI-ready datasets and features that accelerate model development and deployment
Performance and Scalability: Optimize pipelines for big data processing using Spark, Delta Lake, and Databricks-native capabilities, ensuring scalability and reliability for enterprise-scale AI workloads
Cloud-Native Engineering: Implement best practices for CI/CD, infrastructure-as-code, and DevOps using Azure DevOps, Git, and Ansible, while integrating with Databricks workflows for seamless deployment and reuse
Innovation and Continuous Learning: Stay ahead of emerging technologies in data engineering, AI/ML, and cloud ecosystems, leveraging AI tools like Databricks Genie and Agent Bricks and emerging tools and services within Azure AI Foundry and Fabric to accelerate development and maintain cutting-edge, reusable solutions
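As a miniature illustration of the modular, reusable pipeline stages described above, the sketch below shows an extract-and-aggregate step in plain Python. All names (WellReading, pressure readings) are hypothetical; a production pipeline at this scale would run on Databricks/Spark rather than in-process Python:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class WellReading:
    well_id: str
    pressure_psi: float

def extract(raw_rows: Iterable[dict]) -> list[WellReading]:
    """Parse raw rows into typed records, dropping rows missing required fields."""
    readings = []
    for row in raw_rows:
        if "well_id" in row and "pressure_psi" in row:
            readings.append(WellReading(str(row["well_id"]), float(row["pressure_psi"])))
    return readings

def transform(readings: list[WellReading]) -> dict[str, float]:
    """Compute mean pressure per well -- a stand-in for an aggregation stage."""
    totals: dict[str, list[float]] = {}
    for r in readings:
        totals.setdefault(r.well_id, []).append(r.pressure_psi)
    return {well: sum(vals) / len(vals) for well, vals in totals.items()}

raw = [
    {"well_id": "W-1", "pressure_psi": 3000.0},
    {"well_id": "W-1", "pressure_psi": 3100.0},
    {"well_id": "W-2", "pressure_psi": 2500.0},
    {"bad": "row"},  # silently dropped by extract()
]
print(transform(extract(raw)))  # {'W-1': 3050.0, 'W-2': 2500.0}
```

Keeping extract and transform as separate, typed functions mirrors the "modular, reusable data assets" goal: each stage can be tested and reused independently.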
Qualifications
Required
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrated high proficiency in programming fundamentals
At least 5 years of proven experience as a Data Engineer or in a similar role working with data and ETL processes on cloud-based data platforms (Azure preferred)
Strong hands-on experience with Databricks (Lakehouse, Delta Lake, Unity Catalog) and Microsoft Azure services (Azure Data Factory, Azure Synapse, Azure Blob Storage, and Azure Data Lake Storage Gen2)
Strong understanding of data modeling, data governance, and software engineering principles as they apply to data engineering (e.g., CI/CD, version control, testing)
Strong problem-solving skills and attention to detail
Excellent communication and collaboration skills
Preferred
Demonstrated learning agility in emerging data and AI tools and services
Experience integrating AI/ML pipelines or feature engineering workflows into data platforms
Strong experience in Python preferred; experience in other languages such as Scala, Java, or C# is also accepted
Experience building Spark applications using PySpark
Experience with file formats such as Parquet, Delta, and Avro
Experience efficiently querying API endpoints as a data source
Understanding of the Azure environment and related services, such as subscriptions and resource groups
Understanding of Git workflows in software development
Experience using Azure DevOps Pipelines and Repos to deploy and maintain solutions
Understanding of Ansible and how to use it in Azure DevOps pipelines
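One qualification above, efficiently querying API endpoints as a data source, can be sketched as a paginated extraction loop. This is a hypothetical illustration: fake_fetch below is a stub standing in for a real HTTP client call, and the {"items", "has_more"} payload shape is assumed, not taken from any specific API:

```python
from typing import Callable

def extract_all_pages(fetch_page: Callable[[int], dict]) -> list[dict]:
    """Pull every record from a paginated endpoint.

    fetch_page(page) is assumed to return {"items": [...], "has_more": bool}.
    The loop stops as soon as the server reports no further pages, so no
    more requests are issued than the data requires.
    """
    records: list[dict] = []
    page = 0
    while True:
        payload = fetch_page(page)
        records.extend(payload["items"])
        if not payload.get("has_more"):
            break
        page += 1
    return records

# Stub standing in for a real HTTP call (e.g. via the `requests` library).
DATA = [{"id": i} for i in range(5)]

def fake_fetch(page: int, size: int = 2) -> dict:
    chunk = DATA[page * size:(page + 1) * size]
    return {"items": chunk, "has_more": (page + 1) * size < len(DATA)}

print(len(extract_all_pages(fake_fetch)))  # 5 records across 3 pages
```

Respecting the server's has_more signal (rather than probing for empty pages) is one simple way the "efficiently" in that qualification can be met.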
Company
Energy Job Search
Join 3,000,000+ energy professionals who trust us to power their careers.
Funding
Current Stage
Growth Stage
Company data provided by Crunchbase