AgileEngine
Data Engineer (Middle/Senior) ID26045
Product Design · Software
Growth Opportunities · No H1B · U.S. Citizen Only
Responsibilities
Work collaboratively with other engineers, architects, data scientists, analytics teams, and business product owners in an agile environment;
Architect, build, and support the operation of cloud and on-premises enterprise data infrastructure and tools;
Design and build robust, reusable, and scalable data-driven solutions and data pipeline frameworks to automate the ingestion, processing, and delivery of both structured and unstructured batch and real-time streaming data (see the pipeline sketch after this list);
Build data APIs and data delivery services to support critical operational processes, analytical models, and machine learning applications;
Assist in the selection and integration of data-related tools, frameworks, and applications required to expand platform capabilities;
Understand and implement best practices in the management of enterprise data, including master data, reference data, metadata, data quality, and lineage.
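
To give the pipeline responsibilities above concrete shape, here is a minimal sketch of a streaming ingestion job in Apache Beam, one of the frameworks named in the requirements below. The project, topic, and table names are hypothetical placeholders, and the target BigQuery table is assumed to already exist.

    # Minimal sketch, assuming hypothetical project/topic/table names
    # and a pre-existing BigQuery table.
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions


    def parse_event(message: bytes) -> dict:
        # Decode one JSON event from the raw Pub/Sub payload.
        return json.loads(message.decode("utf-8"))


    def run():
        options = PipelineOptions(streaming=True)
        with beam.Pipeline(options=options) as p:
            (
                p
                | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                    topic="projects/my-project/topics/events")  # hypothetical topic
                | "ParseJSON" >> beam.Map(parse_event)
                | "KeepValid" >> beam.Filter(lambda event: "id" in event)
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "my-project:analytics.events",  # hypothetical table
                    create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
            )


    if __name__ == "__main__":
        run()

On GCP, a job like this would typically be submitted to Dataflow via the DataflowRunner; the same pipeline code also runs locally with the DirectRunner.
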
Qualifications
Required
4+ years of experience with Python or Java (Python preferred, or a willingness to work with it)
4+ years of experience building data lakes and cloud data platforms leveraging cloud-native (GCP/AWS) architecture, ETL/ELT, and data integration
Three years of development experience with cloud services (AWS, GCP, Azure) utilizing various support tools (e.g., GCS, Dataproc, Cloud Dataflow, Airflow (Cloud Composer), Kafka, Cloud Pub/Sub)
Expertise in developing distributed data processing and streaming frameworks and architectures (Apache Spark, Apache Beam, Apache Flink; see the streaming sketch after this list)
Experience with Snowflake is a must
In-depth knowledge of NoSQL database technologies (e.g., MongoDB, Bigtable, DynamoDB)
Expertise in build and deployment tools (Visual Studio, PyCharm, Git/Bitbucket/Bamboo, Maven, Jenkins, Nexus)
4+ years of experience and expertise in database design techniques and philosophies (e.g., RDBMS, document stores, star schema, the Kimball model)
4 years of experience with integration and service frameworks (e.g., API gateways, Apache Camel, Swagger/OpenAPI, ZooKeeper, Kafka, messaging tools, microservices)
Expertise with containerized microservices and REST/GraphQL-based API development
Experience leveraging continuous integration/delivery tools (e.g., Jenkins, Docker, OpenShift, Kubernetes, and container automation) in a CI/CD pipeline
Advanced understanding of software development and research tools
Attention to detail, results orientation, and a strong customer focus
Ability to work as part of a team and independently
Analytical and problem-solving skills
Technical communication skills
Ability to prioritize workload to meet tight deadlines
Working experience with wealth and asset management projects
Upper-intermediate English level
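
To make the distributed streaming requirement concrete, the following is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet. The broker, topic, event schema, and bucket names are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

    # Minimal sketch, assuming a hypothetical Kafka topic, event schema, and
    # GCS bucket, plus the spark-sql-kafka connector on the classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("events-stream-sketch").getOrCreate()

    # Hypothetical event schema.
    schema = StructType([
        StructField("id", StringType()),
        StructField("event_time", TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
        # Kafka delivers raw bytes; cast to string and parse the JSON payload.
        .select(from_json(col("value").cast("string"), schema).alias("event"))
        .select("event.*")
    )

    (
        events.writeStream.format("parquet")
        .option("path", "gs://my-bucket/events/")  # hypothetical sink
        .option("checkpointLocation", "gs://my-bucket/checkpoints/")
        .start()
        .awaitTermination()
    )

On GCP, a job like this would typically run on Dataproc, which is also listed above.
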
Preferred
Airflow (see the DAG sketch after this list)
BigQuery
Kafka
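
As a small illustration of the preferred tools working together, below is a minimal Airflow DAG sketch that schedules a daily BigQuery rollup query. The DAG id, project, and dataset names are hypothetical, and it assumes Airflow 2.4+ with the apache-airflow-providers-google package installed.

    # Minimal sketch, assuming Airflow 2.4+ (which accepts `schedule`), the
    # Google provider package, and hypothetical project/dataset names.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import (
        BigQueryInsertJobOperator,
    )

    with DAG(
        dag_id="daily_events_rollup",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Aggregate the raw event stream into a per-day rollup.
        rollup = BigQueryInsertJobOperator(
            task_id="rollup_events",
            configuration={
                "query": {
                    "query": (
                        "SELECT DATE(event_time) AS day, COUNT(*) AS n "
                        "FROM `my-project.analytics.events` "
                        "GROUP BY day"
                    ),
                    "useLegacySql": False,
                }
            },
        )

On GCP, this kind of DAG would typically run on Cloud Composer, the managed Airflow service mentioned in the requirements.
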
Benefits
Professional growth
Competitive compensation
A selection of exciting projects
Flextime