Data & Integration Engineer
Job description
We are seeking a Data and Integration Engineer to build and maintain scalable data and system architectures supporting scientific platforms. The role focuses on developing cloud-based integration and processing solutions that deliver high-quality data for analytics, AI, and machine learning. You will work closely with researchers and business stakeholders to translate requirements into robust enterprise data solutions.
- Develop, evolve, and adapt system architecture to enhance platforms supporting the development of scientific tools and models (e.g. data lakes, data warehouses, integration frameworks, distributed computing, OpenDevStack, and OpenShift), with a strong focus on data quality, security, and governance.
- Implement and optimize data integration and processing solutions using modern technologies (e.g. cloud services, ETL/ELT frameworks, SQL, and big data tools) to ensure performance and scalability.
- Collaborate closely with researchers, data scientists, and business stakeholders to understand requirements and translate them into robust, cloud-based data solutions.
- Design, build, and operate scalable integration solutions to enable reliable, high-quality data delivery for analytics, AI, machine learning use cases, and partner systems.
- Build new solutions within corporate environments and bring cross-functional teams together to drive successful delivery.
- Flexible working time models: home office and flexible working hours, depending on department and position.
- Additional days off ("bridge days"): extra free time through additional days off that bridge single working days between public holidays and the weekend, without having to use vacation days.
- Canteen & cafeteria: whether it's coffee and a croissant for breakfast, a choice of lunch menus, or snacks in between, our subsidized staff restaurant and cafeteria has something for every taste, including vegetarian and vegan options.
- Learning & development: diverse training and development opportunities for your personal and professional growth, because you never stop learning.
- Health promotion: your health matters to us, which is why we offer a range of programs to promote physical and mental well-being.
- Public transport ticket: we encourage our employees to commute to work by public transport, and we cover the cost of the ticket.
Requirements
- Strong foundation in HPC and Linux/Unix, with hands-on experience in HPC environments, job schedulers (e.g. SLURM), and container technologies such as Singularity.
- Practical experience with containerized and cloud-native platforms, including OpenShift and Docker, along with a solid understanding of modern deployment concepts.
- Working knowledge of PostgreSQL and general relational database concepts, including querying, integration, and basic performance optimization.
- Experience with workflow orchestration and data pipelines, ideally using Nextflow (considered a strong plus, but not mandatory).
- Knowledge of cloud-based data platforms and services, preferably with hands-on experience with AWS (e.g. S3, Lambda, Glue, Step Functions, Spark) and related technologies such as Snowflake or Databricks.
- Strong analytical skills, a collaborative mindset, and excellent English communication, with a focus on quality, documentation, and engineering best practices.
Benefits & conditions
The minimum gross annual salary for this position is €64,170 (full-time), according to the classification in the collective bargaining agreement for the chemical industry. Depending on professional experience and qualifications, we offer pay above this minimum.