About the Role
We are looking for a skilled Big Data Engineer to join our client’s data team in Zurich. In this role, you’ll design and optimize data pipelines that process large-scale datasets to support analytics, machine learning, and business intelligence. If you enjoy working with modern data technologies and transforming raw data into usable insights, this could be the right role for you.
What You’ll Do
- Develop and maintain scalable data processing pipelines using big data frameworks
- Work with cloud platforms to manage data storage, processing, and access
- Collaborate with data scientists and analysts to ensure data availability and quality
- Optimize data workflows for performance, cost-efficiency, and reliability
- Implement monitoring and alerting for data infrastructure
What You Bring
- A degree in Computer Science, Data Engineering, or a related field
- 3–7 years of experience in big data engineering or data infrastructure
- Strong experience with technologies such as Spark, Hadoop, Kafka, or Airflow
- Proficiency in SQL and Python (or Scala/Java)
- Fluency in German is mandatory; English proficiency is also required
What We Offer
- A role working with large-scale data in a cloud-native environment
- Competitive compensation and development opportunities
- Access to the latest data tools and technologies
- A collaborative team culture focused on innovation and ownership