SPADTEK SOLUTIONS · 4 days ago
Big Data Developer
Responsibilities
Design and develop a new automation framework for ETL processing
Support the existing framework and serve as the technical point of contact for all related teams
Enhance the existing ETL automation framework per user requirements
Performance-tune Spark and Snowflake ETL jobs
Run proofs of concept (POCs) on new technologies and analyze their suitability for cloud migration
Optimize processes through automation and new utility development
Collaborate with other teams on issues and new features
Support resolution of batch issues
Support application teams with any queries
Qualification
Required
8–10 years of experience
7+ years of Data engineering experience
Must be strong in UNIX shell and Python scripting
Must be strong in Spark
Must have strong knowledge of SQL
Hands-on knowledge of how HDFS, Hive, Impala, and Spark work
Strong logical reasoning capabilities
Working knowledge of GitHub, DevOps, and CI/CD/enterprise code management tools
Strong collaboration and communication skills
Must be a strong team player with excellent written and verbal communication skills
Ability to create and maintain a positive environment of shared success
Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor
Preferred
Working experience with Snowflake and any data integration tool (e.g., Informatica Cloud)
Experience with any cloud platform: Snowflake, Azure, or AWS
IDMC or any other ETL tool
Company
SPADTEK SOLUTIONS
At Spadtek, we believe sustainable growth happens when talent works in harmony with technology.
Funding
Current Stage
Growth Stage
Company data provided by Crunchbase