Data Engineer

Rhenus AG & Co. KG
Hilden, Germany
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English, German

Job location

Remote
Hilden, Germany

Tech stack

Azure
Continuous Integration
Data Governance
Data Infrastructure
ETL
Data Security
Data Systems
Distributed Data Store
Metadata Management
SQL Databases
Parquet
Spark
PySpark
Avro
Data Management
Data Pipelines
Databricks

Job description

We are looking for an experienced and solution-oriented Azure Data Platform Engineer to develop, operate, and optimize our modern Azure-based data platform. In this role, you will focus on Azure, Databricks, data infrastructure, and CI/CD, supporting multi-tenant environments and enabling reliable, scalable data solutions.

  • Develop a Modern Azure Data Platform: Design, build, and operate end-to-end data solutions using Azure Data Factory, Azure Data Lake Storage Gen2, Databricks, and Azure Synapse Analytics.
  • Create Data Pipelines: Develop and maintain scalable ETL/ELT pipelines using PySpark and Spark, with a strong focus on data quality, reliability, and performance.
  • Multi-Tenant & Environment Support: Support and operate multi-tenant data platforms across multiple environments (development, test, production) with clear separation and governance.
  • Infrastructure & Platform Operations: Provision, configure, and maintain Azure data infrastructure, ensuring stability, security, and scalability.
  • CI/CD for Data Platforms: Build and maintain CI/CD pipelines for data pipelines and Databricks workloads, enabling automated deployments across environments.
  • Cost-Efficient & Best-Practice Azure Usage: Apply Azure best practices to optimize performance and cost, including resource sizing, lifecycle management, and cost monitoring.
  • Collaboration with BI & Data Teams: Work closely with BI and data teams to support efficient data models and reporting solutions.
  • Data Governance & Security Basics: Support data governance requirements such as access control, secure data handling, and basic metadata management.

Requirements

Do you have experience in Spark?

  • Azure Data Platform Experience: Several years of hands-on experience with Azure Data Factory, ADLS Gen2, Databricks, and Azure Synapse Analytics.
  • PySpark & Spark: Strong experience building distributed data processing pipelines using PySpark and Spark.
  • ETL / ELT Knowledge: Solid understanding of ETL/ELT concepts and data modeling practices.
  • CI/CD & Automation: Experience with CI/CD pipelines for data workloads and basic automation of deployments.
  • SQL Skills: Strong SQL skills and experience optimizing analytical queries.
  • Data Formats: Practical experience with Parquet and/or Avro.
  • Infrastructure Awareness: Good understanding of Azure resource structure, environments, and operational best practices.
  • Analytical & Team-Oriented Mindset: Solution-focused approach with the ability to work independently and collaboratively.
  • Language Skills: Fluency in English is required; knowledge of German is an advantage.

About the company

This is a remote role in the European Union. Candidates need to be based in a country where Rhenus Overland Transport is already established.

Apply for this position