DIRECTV · 14 hours ago

Principal Platform Engineer – Data Ops Engineer

DIRECTV is looking for a talented and motivated Data Ops Engineer to lead the automation, orchestration, and optimization of cloud-based data workflows. The role involves collaborating with various teams to design and implement data deployment strategies that enhance efficiency and reliability.

Digital Entertainment · Film · Media and Entertainment · Sports · TV · TV Production

H1B Sponsor Likely

Responsibilities

Define the long-term orchestration strategy and architectural standards for workflow management across Snowflake, Databricks, and AWS
Lead the design, implementation, and optimization of complex workflows using Apache Airflow and related tools
Mentor teams in best practices for DAG design, error handling, and resilience patterns (see the Airflow sketch after this list)
Champion cross-platform orchestration that supports data mesh and modern data architecture principles
Architect and guide the development of reusable automation frameworks in Python, Spark, and Shell that streamline data workflows and platform operations
Lead automation initiatives across data platform teams, setting coding and modularization standards
Evaluate and introduce emerging technologies and scripting tools to accelerate automation and reduce toil
Define and maintain enterprise-wide CI/CD standards for data pipelines and platform deployments using Jenkins, GitLab, and AWS CodePipeline
Drive adoption of Infrastructure as Code (IaC) and GitOps practices to enable scalable and consistent environment provisioning
Provide technical leadership for DevOps integration across Data, Security, and Cloud Engineering teams
Lead performance audits and capacity planning efforts across Snowflake, Databricks, and orchestrated workflows
Build frameworks for proactive monitoring, benchmarking, and optimization using Datadog, AWS CloudWatch, and JMeter
Partner with platform teams to implement self-healing systems and auto-scaling capabilities
Oversee complex incident resolution, lead post-mortems, and implement systemic preventive measures
Develop standardized runbooks, incident response frameworks, and training programs to elevate Tier 2/3 capabilities
Act as a liaison between engineering leadership, security, and business teams to drive platform roadmaps and risk mitigation
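
As an illustration of the DAG design, retry, and alerting patterns referenced above, here is a minimal Airflow sketch. The DAG id, task logic, and failure callback are hypothetical placeholders, not DIRECTV's actual pipelines.

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    # Hypothetical alerting hook; in practice this might page on-call via
    # Datadog or Slack rather than print.
    print(f"Task {context['task_instance'].task_id} failed; alerting on-call.")

default_args = {
    "owner": "data-platform",               # hypothetical team name
    "retries": 3,                           # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "retry_exponential_backoff": True,      # back off between attempts
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="example_ingest_pipeline",       # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(
        task_id="extract",
        python_callable=lambda: print("extract from source"),
    )
    load = PythonOperator(
        task_id="load",
        python_callable=lambda: print("load to the warehouse"),
    )
    extract >> load

In a shared framework of the kind this role would own, the retry policy and callback would be defined once in common default_args rather than repeated per DAG.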

Qualifications

Apache Airflow · Snowflake · Databricks · AWS · CI/CD · Python · PySpark · Shell scripting · Jenkins · GitLab · Performance monitoring · Automation frameworks · Incident management · Agile collaboration · Analytical skills · Troubleshooting

Required

4–5 years required (10+ preferred) of overall software engineering experience, including 10+ years (preferred) as a Big Data Architect focused on end-to-end data infrastructure design using Spark, PySpark, Kafka, Databricks, and Snowflake
4–5 years required (8+ preferred) of hands-on programming experience with Python, PySpark, JavaScript, and Shell scripting, with demonstrated expertise in building reusable, configuration-driven frameworks for Databricks
5+ years of experience designing and implementing configuration-driven frameworks in PySpark on Databricks, enabling scalable, metadata-driven data pipeline orchestration (see the PySpark sketch after this list)
4–5 years required (8+ preferred) of experience in CI/CD pipeline development and automation using GitLab, Jenkins, and the Databricks REST APIs, including infrastructure provisioning and deployment at scale
4–5 years required (8+ preferred) of deep expertise in Snowflake, Databricks, and AWS, including migration, optimization, and orchestration of data workflows, as well as advanced features such as data masking, time travel, and Delta Lake
7+ years of experience in performance monitoring and observability using tools like SonarQube, JMeter, Splunk, Datadog, and AWS CloudWatch, with a focus on optimizing pipeline efficiency and reducing cost
7+ years of experience in Tier 2/3 support roles, specializing in root cause analysis, incident resolution, and the creation of troubleshooting runbooks and automation for operational resilience
4+ years of experience with dbt, including the conversion of traditional dimensional models to modular dbt models, integration with CI/CD, and application of testing and documentation best practices
Deep expertise in Apache Airflow and orchestration technologies, having led large-scale orchestration implementations across multi-cloud environments
Strong analytical and architectural skills to design, optimize, and troubleshoot complex data pipelines, with demonstrated success in delivering performance and cost improvements (e.g., $1M in annual savings through Spark SQL optimization)
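
As a sketch of the configuration-driven PySpark pattern called for above: pipeline definitions live in metadata and a single generic runner executes them, so adding a pipeline means adding a config entry rather than new code. The table names and config schema below are illustrative assumptions only.

from pyspark.sql import SparkSession

# Hypothetical pipeline metadata; in a real framework this would be loaded
# from a config store, YAML file, or a Delta table rather than hard-coded.
PIPELINES = [
    {"source": "raw.orders", "target": "curated.orders",
     "filter": "order_date >= '2024-01-01'"},
    {"source": "raw.customers", "target": "curated.customers",
     "filter": None},
]

def run_pipeline(spark, cfg):
    # Drive one ingest step entirely from metadata.
    df = spark.table(cfg["source"])
    if cfg["filter"]:
        df = df.where(cfg["filter"])
    # On Databricks, saveAsTable writes Delta by default.
    df.write.mode("overwrite").saveAsTable(cfg["target"])

if __name__ == "__main__":
    spark = SparkSession.builder.appName("config_driven_demo").getOrCreate()
    for cfg in PIPELINES:
        run_pipeline(spark, cfg)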

Preferred

Databricks Certified Data Engineer Associate / Professional
SnowPro Advanced Architect Certification
AWS Certified DevOps Engineer – Professional
Apache Airflow Certification
ITIL 4 Managing Professional (for incident management expertise)
Certified ScrumMaster (CSM) (for agile collaboration skills)
Master's degree in Computer Science or Data Engineering

Company

DIRECTV

DIRECTV is an American digital entertainment services provider delivering sports, news, movies, family and local programming channels.

H1B Sponsorship

DIRECTV has a track record of offering H1B sponsorships. Note that this does not guarantee sponsorship for this specific role. The information below is provided for reference (data from the US Department of Labor).
[Chart: distribution of job fields receiving sponsorship, highlighting fields similar to this job]
Trends of Total Sponsorships
2025 (66)
2024 (52)
2023 (30)
2022 (82)
2021 (31)

Funding

Current Stage
Public Company
Total Funding
unknown
2024-09-30: Acquired
2013-08-29: IPO

Leadership Team

Drew Groner
Senior Vice President, Head of Sales & Marketing
John Manganilla
Senior Director of Product Development, Video Content


Company data provided by crunchbase