Data Engineer [Senior]
Job description
- Design, build, and optimize scalable data products on a multinational energy company's data platform using Azure Databricks and related services.
- Implement reusable, modular components for ingestion, transformation, and orchestration with ADF and Databricks notebooks (a sketch of such a component follows below).
- Apply Lakehouse architecture principles, ensuring compliance with governance standards (lineage, cataloging with Unity Catalog, and security).
- Optimize Spark jobs for cost and performance (partitioning, caching, indexing).
- Implement CI/CD pipelines for Databricks with Azure DevOps.
- Collaborate closely with data product owners, architects, and business stakeholders to translate requirements into technical solutions.
- Mentor junior data engineers and foster a culture of automation, innovation, and continuous learning.
- Contribute to building a secure, high-performance data ecosystem that enables advanced analytics and business insights.

(…and there's always space to explore what excites you.)
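For a flavor of the hands-on work, here is a minimal, illustrative sketch of a reusable PySpark transformation step that writes partitioned Delta output. It is not project code: the paths, column names, and the `bronze_to_silver` helper are hypothetical placeholders (in a Databricks notebook, `spark` is already provided for you):

```python
# Illustrative only: a modular bronze -> silver step in the Medallion style.
# Paths and columns (event_id, event_ts) are hypothetical placeholders.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

def bronze_to_silver(df: DataFrame) -> DataFrame:
    """Deduplicate a raw ingest batch and derive a partition column."""
    return (
        df.dropDuplicates(["event_id"])
          .filter(F.col("event_id").isNotNull())
          .withColumn("event_date", F.to_date("event_ts"))
    )

raw = spark.read.format("delta").load("/mnt/lake/bronze/events")

(
    bronze_to_silver(raw)
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")  # partition pruning keeps downstream scans cheap
    .save("/mnt/lake/silver/events")
)
```

Keeping each step a small, typed function like this is what makes pipeline components reusable and unit-testable, which is the spirit of the role.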
What You'll Experience
- Community: Join a friendly multicultural team that loves to connect through regular social events, activities, and our yearly company retreat, Pantheon.
- Flexibility: We encourage a hybrid setup, but there are no mandatory office days. You decide when to come in and how to manage your time.
- Growth: We support your development with a €1000 annual learning budget, dedicated mentorship, and a clear growth framework. We do annual reviews to recognize your growth and make sure your salary reflects it.
- Well-being: Benefit from health insurance, flexible benefits, extra vacation days with tenure, and regular check-ins to support your well-being.
- Purpose: Drive real impact and take ownership of meaningful projects with global clients.
A Place Where You Belong
At DEUS, we are deeply committed to building an environment where you can truly be your authentic self. We celebrate diversity in every dimension, fostering an atmosphere where every individual feels valued and respected.
We do not tolerate any form of discrimination. We maintain a clear 'no assholes' policy, which applies to both our team and our clients, ensuring a respectful and positive dynamic for all.
Requirements
- Solid experience designing and implementing end-to-end data pipelines leveraging Azure Databricks, Delta Lake, and Azure Data Lake Storage (ADLS).
- Hands-on expertise with Apache Spark, PySpark, and modular data ingestion/transformation components.
- Proficiency in Azure services such as Data Factory (ADF), Azure SQL Database, Synapse Analytics, Key Vault, and Logic Apps.
- Strong programming skills in Python and SQL, plus automation scripting.
- Experience with CI/CD pipelines (Azure DevOps, GitHub) and Git-based version control.
- A solid understanding of Lakehouse architecture principles (Medallion architecture) for structured and semi-structured data.
- Knowledge of ELT/ETL design, data modeling, orchestration, and event-driven architectures (a minimal upsert sketch follows after this list).
- A mindset grounded in values like Integrity, Curiosity, Creation, and Collaboration.
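One pattern that sits behind several of these requirements (Delta Lake, ELT design, incremental pipelines) is the Delta merge, or upsert. A minimal sketch, assuming the delta-spark package (bundled with Databricks runtimes) and hypothetical paths and keys:

```python
# Illustrative only: incremental upsert of a staging batch into a Delta table.
# Paths and the event_id join key are hypothetical placeholders.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/mnt/lake/silver/events_staging")
target = DeltaTable.forPath(spark, "/mnt/lake/silver/events")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.event_id = s.event_id")
    .whenMatchedUpdateAll()      # refresh rows that changed
    .whenNotMatchedInsertAll()   # append rows that are new
    .execute()
)
```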
Bonus points if you have:
- Familiarity with Collibra for data cataloging and governance.
- Certifications such as Microsoft Certified: Azure Data Engineer Associate or Databricks Certified Data Engineer Professional.
- Experience with serverless functions and advanced automation.
- Previous mentoring experience and contributions to DataOps best practices and reusable components.