Medior Data Engineer - Databricks & Lakehouse Platform (Cloud)
Fugro Group
Nootdorp, Netherlands
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Intermediate
Job location: Remote; Nootdorp, Netherlands
Tech stack
API
Artificial Intelligence
Amazon Web Services (AWS)
Azure
Cloud Computing
Cloud Storage
Code Review
Databases
Continuous Integration
Information Engineering
Data Governance
Hive
Python
Standard SQL
Runbook
Spark
Data Lake
PySpark
Databricks
Job description
Fugro is a global leader in Geo-data, building a modern cloud-based lakehouse platform on Databricks to unlock high-quality, governed data at scale. As we accelerate our digital transformation, we are looking for a hands-on Medior Data Engineer who enjoys building reliable cloud data pipelines, curating trusted datasets, and enabling analytics and AI across the organisation. You will work closely with engineering, analytics, data science and domain teams to deliver production-grade data products that power Fugro's global operations.
Your role
- You design, build and maintain cloud-native batch and streaming pipelines on Databricks using Lakeflow, Delta Lake and orchestration tools.
- You implement the medallion architecture (bronze/silver/gold) to deliver trustworthy, performant datasets.
- You develop transformations in PySpark and Spark SQL and help standardise reusable engineering patterns.
- You take ownership of data quality, observability and operational reliability through validation, monitoring, alerting and runbooks.
- You work with Unity Catalog to ensure governed access, documentation and lineage-aware development.
- You translate analytical and business requirements into scalable, pragmatic datasets that enable BI, analytics and AI.
- You contribute to engineering best practices, including code reviews, testing approaches, CI/CD and cost-efficient cloud usage.
- You collaborate closely with platform, analytics and domain teams to deliver production-ready data products end-to-end.
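To give a flavour of the bronze-to-silver promotion described above, here is a minimal, framework-free sketch of a validation gate that routes bad records to quarantine. The record fields and rules are hypothetical; on Databricks this logic would typically run as PySpark transformations over Delta tables.

```python
# Sketch of a bronze -> silver validation gate (hypothetical rules).
# In production this would be PySpark over Delta tables on Databricks.

def promote_to_silver(bronze_rows):
    """Split raw (bronze) records into validated (silver) and quarantined."""
    silver, quarantine = [], []
    for row in bronze_rows:
        errors = []
        if not row.get("id"):
            errors.append("missing id")
        if row.get("depth_m") is not None and row["depth_m"] < 0:
            errors.append("negative depth")
        if errors:
            # Keep the bad record, annotated, so it can be inspected and replayed.
            quarantine.append({**row, "_errors": errors})
        else:
            silver.append(row)
    return silver, quarantine

bronze = [
    {"id": "s1", "depth_m": 12.5},
    {"id": None, "depth_m": 3.0},
    {"id": "s2", "depth_m": -1.0},
]
silver, quarantine = promote_to_silver(bronze)
# silver keeps s1; the other two rows land in quarantine with error labels
```

Quarantining rather than dropping failed records is what makes the monitoring and alerting mentioned above actionable: the error labels feed dashboards and runbooks.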
Requirements
- You bring 3-6 years of data engineering experience, or equivalent experience delivering production-grade cloud pipelines.
- You have strong Python and SQL skills and hands-on experience with Apache Spark on Databricks.
- You have worked in at least one major cloud environment (AWS, Azure or GCP) and understand cloud storage and security concepts.
- You are familiar with lakehouse principles, Delta Lake and incremental processing patterns.
- You understand data governance concepts such as catalogs, permissions, lineage and data contracts (Unity Catalog is a plus).
- You have experience with common ingestion patterns (files, APIs, databases) and know how to balance reliability, performance and cost.
- You communicate clearly and collaborate effectively with engineers, analysts and business stakeholders.
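The incremental processing pattern mentioned in the requirements comes down to idempotent upserts keyed on a natural identifier. A stdlib-only sketch using sqlite3's UPSERT clause illustrates the mechanics; in a lakehouse this role is played by a Delta Lake `MERGE INTO` (table and column names here are hypothetical):

```python
import sqlite3

# Stdlib-only illustration of an idempotent incremental load.
# On Databricks the equivalent pattern is a Delta Lake MERGE INTO.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE readings (id TEXT PRIMARY KEY, value REAL, updated_at TEXT)"
)

def upsert_batch(conn, batch):
    """Apply a batch of source changes; re-running the same batch is a no-op."""
    conn.executemany(
        """
        INSERT INTO readings (id, value, updated_at)
        VALUES (:id, :value, :updated_at)
        ON CONFLICT(id) DO UPDATE SET
            value = excluded.value,
            updated_at = excluded.updated_at
        """,
        batch,
    )
    conn.commit()

batch = [
    {"id": "r1", "value": 1.0, "updated_at": "2024-01-01"},
    {"id": "r2", "value": 2.0, "updated_at": "2024-01-01"},
]
upsert_batch(conn, batch)
upsert_batch(conn, batch)  # idempotent: replaying the batch changes nothing
rows = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
# rows == 2 even after two runs
```

Idempotency is what lets a pipeline retry a failed batch safely, which is the backbone of the reliability/cost balance the requirement alludes to.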
Benefits & conditions
- A competitive salary;
- 29 holidays per year based on full-time employment (4 of which are appointed by Fugro management) and the possibility to purchase 12 additional days;
- Extensive career & training opportunities both nationally and internationally;
- Flexible working hours and the ability to work from home, in consultation with your manager and in line with corporate policies;
- Commuting allowance;
- Modern pension scheme;
- Collective health insurance;
- Possibility to register with our corporate fitness plan;
- Coaching options through our EAP (Employee Assistance Program).
Are you interested?
If we are both still positive after the second interview, we will make you an offer, and with that we hope to welcome you to Fugro!