AWS + Databricks Developer @ ApTask | Jobright.ai
Be an early applicant · Less than 25 applicants · Posted by Agency
ApTask · 3 days ago

AWS + Databricks Developer

Human Resources · Information Technology
Growth Opportunities


Responsibilities

Strong proficiency in Python.
Building S3 file systems via code pipelines.
Creating and using IAM roles.
Building and managing Key Management Services.
Creating and using AWS Transfer Family and Storage Gateway mechanisms.
Building and using CloudFormation templates.
Standing up and maintaining EC2 instances and Lambda services.
Standing up and maintaining Kubernetes clusters.
Experience with data visualization tools (Power BI, Tableau).
Extensive experience in Databricks technology.
Extensive experience in data mapping, data architecture, and data modeling on Databricks.
Design, implementation, and maintenance of data pipelines using Python and PySpark on Databricks.
Strong proficiency in Python and PySpark; able to write and execute complex queries.
Extensive experience in Databricks Data engineering (Job Runs, Data Ingestion and Delta Live Tables).
Setting up and maintaining SQL Warehouses, SQL Editor, and alerts.
Setting up and maintaining workspaces, catalogs, workflows, and Compute.
Experienced with All-purpose compute, Job computes, SQL Warehouse, Vector search, Pools, and Policies.
Setting up and maintaining Unity Catalog.
Creating and maintaining Databricks schema, objects, Notebooks, scheduler, and Job clusters.
Building Notebooks with complex code structures.
Building and maintaining the Autoloader process.
Using table ACLs and row- and column-level security with Unity Catalog.
Data Ingestion, Storage, Harmonization, and curation.
Performance tuning to ensure optimal job performance.
Data ingestion via various methods: APIs, direct database reads, AWS cross-account data copy, and third-party vendor data over FTP.
Debugging failed jobs.
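Several of the AWS bullets above (S3 file systems built via code pipelines, KMS, IAM roles, CloudFormation templates) typically come together in a single stack. The following is a minimal sketch that assembles such a template as a plain dict and emits CloudFormation JSON; every bucket, key, role, and policy name is a hypothetical placeholder, not anything specific to this role.

```python
import json

def s3_kms_stack(bucket_name: str, role_name: str) -> str:
    """Emit a CloudFormation template (JSON) for a KMS-encrypted S3 bucket
    plus an IAM role allowed to read it. Names and policies are
    illustrative placeholders, not a production baseline."""
    bucket_arn = {"Fn::GetAtt": ["DataBucket", "Arn"]}
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            # Customer-managed KMS key used for bucket encryption.
            "DataKey": {
                "Type": "AWS::KMS::Key",
                "Properties": {"Description": "CMK for the data bucket",
                               "EnableKeyRotation": True},
            },
            # S3 bucket with default server-side encryption via the key above.
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [{
                            "ServerSideEncryptionByDefault": {
                                "SSEAlgorithm": "aws:kms",
                                "KMSMasterKeyID": {"Fn::GetAtt": ["DataKey", "Arn"]},
                            }
                        }]
                    },
                },
            },
            # IAM role (assumable by Lambda here) with read-only bucket access.
            "ReaderRole": {
                "Type": "AWS::IAM::Role",
                "Properties": {
                    "RoleName": role_name,
                    "AssumeRolePolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{"Effect": "Allow",
                                       "Principal": {"Service": "lambda.amazonaws.com"},
                                       "Action": "sts:AssumeRole"}],
                    },
                    "Policies": [{
                        "PolicyName": "ReadDataBucket",
                        "PolicyDocument": {
                            "Version": "2012-10-17",
                            "Statement": [{
                                "Effect": "Allow",
                                "Action": ["s3:GetObject", "s3:ListBucket"],
                                "Resource": [bucket_arn,
                                             {"Fn::Join": ["", [bucket_arn, "/*"]]}],
                            }],
                        },
                    }],
                },
            },
        },
    }
    return json.dumps(template, indent=2)

print(s3_kms_stack("example-data-bucket", "example-reader-role"))
```

In a code pipeline, the emitted file would then be deployed with something like `aws cloudformation deploy --template-file template.json --stack-name data-stack --capabilities CAPABILITY_NAMED_IAM`.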

Qualification


Required

Strong proficiency in Python.
Experienced in building S3 file systems via code pipelines.
Experienced in creating and using IAM roles.
Experienced in building and managing Key Management Services.
Experienced in creating and using AWS Transfer Family and Storage Gateway mechanisms.
Experienced in building and using CloudFormation templates.
Experienced in standing up and maintaining EC2 instances and Lambda services.
Experienced in standing up and maintaining Kubernetes clusters.
Previous experience with data visualization tools (Power BI, Tableau).
Strong experience with Git and CI/CD pipelines.
Extensive experience in Databricks technology.
Extensive experience in data mapping, data architecture, and data modeling on Databricks.
Extensive experience in the design, implementation, and maintenance of data pipelines using Python and PySpark on Databricks.
Strong proficiency in Python and PySpark; able to write and execute complex queries to perform curation and build the views required by end users (single- and multi-dimensional).
Extensive experience in Databricks Data engineering (Job Runs, Data Ingestion and Delta Live Tables).
Strong experience in setting up and maintaining SQL Warehouses, SQL Editor and alerts.
Strong experience in setting up and maintaining workspaces, catalogs, workflows, and Compute.
Experienced with All-purpose compute, Job computes, SQL Warehouse, Vector search, Pools, and Policies.
Experienced in setting up and maintaining Unity Catalog.
Experienced in creating and maintaining Databricks schema, objects, Notebooks, scheduler, and Job clusters.
Strong experience in building Notebooks with complex code structures.
Strong experience in building and maintaining the Autoloader process.
Strong experience using table ACLs and row- and column-level security with Unity Catalog.
Strong experience in Data Ingestion, Storage, Harmonization, and curation.
Proven experience in performance tuning to ensure jobs run at optimal levels with no bottlenecks.
Experienced in data ingestion via various methods: APIs, direct database reads, AWS cross-account data copy, and third-party vendor data over FTP.
Proven experience in debugging failed jobs.
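The Unity Catalog governance items above (table ACLs plus row- and column-level security) reduce to a handful of SQL statements in practice. Below is a hedged sketch that generates them from Python; every catalog, schema, group, and UDF name is a placeholder assumption, while the statement shapes follow Databricks' documented GRANT, SET ROW FILTER, and SET MASK syntax.

```python
def unity_catalog_security_ddl(table: str, group: str,
                               row_filter_fn: str, filter_col: str,
                               masked_col: str, mask_fn: str) -> list[str]:
    """Generate Databricks SQL for a table ACL plus row- and column-level
    security on a Unity Catalog table. Object and function names are
    hypothetical examples."""
    return [
        # Table ACL: grant read access to an account-level group.
        f"GRANT SELECT ON TABLE {table} TO `{group}`;",
        # Row-level security: attach a boolean SQL UDF as a row filter.
        f"ALTER TABLE {table} SET ROW FILTER {row_filter_fn} ON ({filter_col});",
        # Column-level security: mask a sensitive column with a SQL UDF.
        f"ALTER TABLE {table} ALTER COLUMN {masked_col} SET MASK {mask_fn};",
    ]

for stmt in unity_catalog_security_ddl(
        "main.sales.orders", "analysts",
        "main.sales.region_filter", "region",
        "customer_email", "main.sales.mask_email"):
    print(stmt)
```

Each statement would be run from a notebook or the SQL Editor by a principal with the relevant Unity Catalog privileges; the filter and mask UDFs themselves must exist before the ALTER statements are applied.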

Company

ApTask

ApTask is a staffing and recruiting company offering staffing, project, and workforce solutions.

Funding

Current Stage
Growth Stage

Leadership Team

Ned Sands
Vice President of Sales
Taj Haslani
Founder
Company data provided by Crunchbase