Comcast · 6 hours ago
Data Engineer - Python, PySpark, AWS
Comcast is a Fortune 30 global media and technology company that creates innovative products and solutions for millions of customers. The Data Engineer role focuses on developing data structures and pipelines, ensuring data quality, and optimizing data access and consumption methods.
Internet · Telecommunications · TV · Video · Web Hosting
Responsibilities
Develops data structures and pipelines aligned to established standards and guidelines
Ensures data quality during ingest, processing, and final load to target tables
Creates standard ingestion frameworks for structured and unstructured data
Checks and reports on the quality of data being processed
Creates standard methods for end users and downstream applications to consume data, including:
Database views
Extracts
Application Programming Interfaces (APIs)
Develops and maintains information systems (e.g., data warehouses, data lakes), including data access APIs
Implements solutions via data architecture, data engineering, or data manipulation on:
On-prem platforms (e.g., Kubernetes, Teradata)
Cloud platforms (e.g., Databricks)
Determines appropriate storage platforms across on-prem (MinIO, Teradata) and cloud (AWS S3, Redshift) based on privacy, access, and sensitivity requirements
Understands data lineage from source to final semantic layer, including transformation rules
Enables faster troubleshooting and impact analysis during changes
Collaborates with technology and platform management partners to optimize data sourcing and processing rules
Establishes design standards and assurance processes for software, systems, and applications development
Reviews business and product requirements for data operations
Suggests changes and upgrades to systems and storage to accommodate ongoing needs
Develops strategies for data acquisition, archive recovery, and database implementation
Manages data migrations/conversions and troubleshooting of data processing issues
Applies data sensitivity and customer data privacy rules and regulations consistently in all Information Lifecycle Management activities
Monitors system notifications and logs to ensure database and application quality standards
Solves abstract problems by reusing data files and flags
Resolves critical issues and shares knowledge such as trends, aggregates, and volume metrics regarding specific data sources
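For illustration only, the ingest-time quality checks described above can be sketched in plain Python (the role itself uses PySpark and AWS; every name below is hypothetical, not from the posting):

```python
# Hypothetical sketch of a batch data-quality check before loading to
# target tables. Field names and thresholds are illustrative assumptions.

def check_row(row, required_fields=("customer_id", "event_ts")):
    """Return a list of quality issues found in a single ingested record."""
    issues = []
    for field in required_fields:
        if row.get(field) in (None, ""):
            issues.append(f"missing {field}")
    return issues

def quality_report(rows):
    """Summarize issues across a batch so failures can be reported
    before the final load to target tables."""
    bad = {i: check_row(r) for i, r in enumerate(rows) if check_row(r)}
    return {"total": len(rows), "failed": len(bad), "issues": bad}

batch = [
    {"customer_id": "c1", "event_ts": "2024-01-01T00:00:00Z"},
    {"customer_id": "", "event_ts": None},
]
report = quality_report(batch)
print(report["failed"])  # prints 1: one record fails the checks
```

In a PySpark pipeline the same idea would typically run as DataFrame filters or column-level validations, but the batch-then-report shape is the same.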
Qualifications
Required
Bachelor's Degree
5-7 Years of Relevant Work Experience
Python
AWS (including S3, Redshift)
PySpark
Databricks
Preferred
Big Data Architecture
Apache Spark
Data Modeling & Pipeline Design
Kafka / Kinesis (Streaming)
Apache Airflow
GitHub, CI/CD (Concourse preferred)
MinIO
Tableau
Performance Tuning
Jira (ticketing)
Shell Commands
Data Governance & Best Practices
Benefits
Best-in-class Benefits
Company
Comcast
Comcast is a media and technology company that provides broadband internet, mobile services, and entertainment platforms.
H1B Sponsorship
Comcast has a track record of offering H1B sponsorships. Note that this does not guarantee sponsorship for this specific role. The figures below are provided for reference. (Data powered by the US Department of Labor)
Distribution of Different Job Fields Receiving Sponsorship (chart; this job's field is among those represented)
Trends of Total Sponsorships
2025: 705 · 2024: 561 · 2023: 624 · 2022: 750 · 2021: 588 · 2020: 583
Funding
Current Stage: Public Company
Total Funding: $4.92B
Key Investors: California Public Utilities Commission, Massachusetts Broadband Institute, Maine Connectivity Authority
2025-11-13 · Grant · $3.2M
2024-07-02 · Grant · $2.69M
2023-04-24 · Grant · $0.28M
Company data provided by Crunchbase