HURO AI, Inc.
Data Engineer Intern (Native German Speaker)
Computer Software
Responsibilities
Build scalable, robust data pipelines to support large-scale data processing and analytics.
Collaborate with the team to integrate data from diverse sources, ensuring consistency and security.
Support the creation and optimization of efficient data models and schemas.
Help manage data storage systems, including relational and NoSQL databases.
Gain hands-on experience with cloud-based data services on platforms like AWS, Azure, or Google Cloud.
Integrate big data tools like Apache Spark, Hadoop, and Kafka into our infrastructure.
Monitor data pipeline performance and troubleshoot issues to ensure system reliability.
Assist in data validation and quality assurance processes to maintain data accuracy.
Document workflows, processes, and best practices to ensure team knowledge sharing.
Collaborate closely with cross-functional teams, including AI and software engineers, to deliver impactful solutions.
Qualifications
Required
Must be fluent in German: reading, writing, and speaking.
Data Engineering, Data Modeling, and ETL skills.
Data Warehousing and Data Analytics skills.
Experience with data manipulation and transformation.
Knowledge of data visualization tools (e.g., Tableau, Power BI, or similar platforms).
Proficiency in programming languages like Python, SQL, or Java.
Ability to work independently and collaboratively in a team environment.
Strong problem-solving and analytical skills.
Pursuing or recently completed a degree in Computer Science, Data Science, Data Engineering, or a related field.
Hands-on experience with data manipulation, transformation, and modeling techniques.
Familiarity with ETL processes and data integration solutions.
Knowledge of data warehousing and analytics tools.
Interest in cloud computing and big data tools like Apache Spark or Hadoop.
Detail-oriented and proactive, with the ability to manage multiple tasks effectively.