Data Engineer

ABN AMRO Bank N.V.
Amsterdam, Netherlands
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
€ 7.3K

Job location

Remote
Amsterdam, Netherlands

Tech stack

Data analysis
Azure
Cloud Computing
Cloud Storage
Code Review
Information Engineering
Data Integration
ETL
Data Security
Data Systems
Data Visualization
DevOps
Distributed Data Store
GitHub
Python
Metadata Management
Microsoft SQL Server
Azure SQL
MySQL
Performance Tuning
Azure DevOps Pipelines
SQL Stored Procedures
SQL Databases
Data Ingestion
Spark
Git
Data Lake
PySpark
Software Version Control
Jenkins
Databricks

Job description

You will be part of our developer community, working across multiple teams that include a Product Owner, Data Engineers, Integration Engineers, and Azure DevOps Engineers. We encourage new ideas, and you will have the opportunity to share them in team meetings, where we discuss them openly, as well as in our internal SharePoint blogs or the public-facing ABN AMRO Developer Blog.

You know how to explain a problem and its solutions, both in detail to a technical audience and in simplified terms to non-technical people. You organize your time efficiently, stay calm and collected when a problem arises in production, and are great at prioritizing your work. In addition, you have a proven track record in the following:

  • Data Pipelines & ETL Development
      • Design, build, and maintain scalable data pipelines using Azure Databricks (PySpark) and Spark-based ETL frameworks.
      • Optimize end-to-end data processing workflows using the Delta Lake architecture for reliability, performance, and ACID-compliant operations.
  • Data Governance & Storage
      • Implement secure and governed data access using Unity Catalog for metadata management and permissions.
      • Manage data ingestion and storage across ADLS Gen2 and Azure Blob Storage, ensuring efficient file organization, partitioning, and lifecycle management.
  • Data Integration & Analytics
      • Develop integration workflows using Azure Data Factory or Azure Synapse Analytics.
      • Build robust SQL-based transformations with Azure SQL, MS SQL Server, and MySQL, including relational modelling, stored procedures, and performance tuning.
  • Cloud & DevOps (Azure)
      • Build and deploy data solutions on Azure Databricks, Azure SQL, and storage services.
      • Develop CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins to automate deployments, testing, and workspace integration.
      • Manage secrets and credentials using Azure Key Vault.
  • Testing & Quality
      • Implement unit and integration tests for Spark ETL pipelines and SQL logic.
      • Ensure data quality, validation, and schema enforcement across all pipelines using tools, checkpoints, and best practices.
  • Collaboration & Engineering Practices
      • Work closely with data architects, analysts, and cross-functional engineering teams.
      • Use Git for version control, participate in code reviews, and follow best practices for structured, maintainable code.
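The testing-and-quality responsibilities above can be illustrated with a minimal sketch in plain Python (no Spark cluster assumed; the `EXPECTED_SCHEMA`, column names, and record values are hypothetical). The idea is a common one in Spark ETL work: keep transformation logic in pure functions so it stays unit-testable outside a Spark session, and enforce a simple schema check at ingestion.

```python
from datetime import date

# Hypothetical schema for an ingested record: column name -> required type.
EXPECTED_SCHEMA = {"account_id": str, "balance_eur": float, "as_of": date}

def validate_record(record: dict) -> list:
    """Return a list of schema violations (an empty list means valid)."""
    errors = []
    for column, expected_type in EXPECTED_SCHEMA.items():
        if column not in record:
            errors.append(f"missing column: {column}")
        elif not isinstance(record[column], expected_type):
            errors.append(f"{column}: expected {expected_type.__name__}, "
                          f"got {type(record[column]).__name__}")
    return errors

def normalize_balance(record: dict) -> dict:
    """Pure transformation step: round balances to cents.

    Because it takes and returns plain dicts, it can be unit-tested
    without a Spark session and later applied per row inside a PySpark job.
    """
    return {**record, "balance_eur": round(record["balance_eur"], 2)}

good = {"account_id": "NL01", "balance_eur": 10.12345, "as_of": date(2024, 1, 1)}
bad = {"account_id": "NL02", "balance_eur": "ten"}  # wrong type, missing column

print(validate_record(good))   # no violations
print(validate_record(bad))    # type mismatch plus missing column
print(normalize_balance(good)["balance_eur"])
```

In a real pipeline these checks would typically be expressed with a data-quality tool or Delta Lake schema enforcement, but the separation of pure logic from I/O is what makes Spark ETL code testable in CI.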

Requirements

  • 5+ years
  • Collaborative skills
  • Analytical skills

We are looking for a Data Engineer with strong experience in data-related topics. You will have room to experiment, implement your ideas, and contribute to business-critical data governance, data management, data lineage, and data visualization capabilities. You will have the opportunity to build innovative features that support business growth using Azure PaaS and SaaS offerings.

  • Strong hands-on experience with Azure Synapse Analytics, Azure Databricks, PySpark, and Spark processing.
  • Expertise in Delta Lake and distributed data processing.
  • Proficiency in Python (PySpark) and SQL for ETL development.
  • Strong understanding of Azure Data Factory, Azure Synapse, Azure SQL, and cloud-based data integration tools.
  • Experience with ADLS Gen2, Azure Blob Storage, and data lake design.
  • Knowledge of Unity Catalog for governance and metadata management.
  • Familiarity with Azure Key Vault for secure secrets handling.
  • Experience with version control using Git.
  • Hands-on experience with Azure DevOps pipelines and CI/CD tools.
  • Soft Skills
      • Excellent analytical and problem-solving abilities.
      • Strong communication and collaboration across engineering and data teams.
      • Ability to work independently and in agile, fast-moving environments.
      • Attention to detail, data quality, and structured thinking.
  • Preferred
      • Minimum 5-7 years of experience working in data engineering or Spark-based ETL environments.
      • Azure / DevOps / Databricks certifications.
      • Experience working with large-scale distributed data systems and enterprise-grade data lakes.

Benefits & conditions

  • € 5.112 - € 7.303 pm
  • Excellent employment conditions
  • You are in charge of your personal development

Why ABN AMRO?

  • Building a future proof bank
  • A diverse and inclusive culture
  • Extensive internal career opportunities

The gross monthly salary displayed above is based on a 36-hour work week, including vacation pay and benefit budget.
  • The Benefit Budget is 11% of your salary. The Benefit Budget allows you to acquire additional employment benefits. If you make no purchases or reservations in the Benefit Shop in a given month, you are paid one twelfth of your Benefit Budget that month.
  • Five weeks of vacation per year. You have the option to purchase an additional four weeks per year.
  • A personal development budget of € 1,000 per year, which you can accumulate up to € 3,000.
  • Possibility to work from home (in consultation with your team and depending on your position).
  • An annual public transport pass providing free travel throughout the Netherlands.
  • An excellent pension scheme.

About the company

ABN AMRO's Central Data Office (CDO) is at the forefront of data-driven innovation, shaping and executing the bank's data strategy to unlock meaningful value across the organisation. With a strong focus on a data-centric future, the CDO brings data governance, engineering, and management together under a unified vision. By championing the Federated Data Governance Model, it ensures high-quality data, consistent standards, and streamlined initiatives that strengthen compliance, efficiency, and organisational alignment. Through strategic roles such as Data Business Partners and an enhanced approach to data ownership, the CDO acts as a catalyst for informed decision-making and drives a culture of excellence in the digital era.

Within Data Management Engineering (DME), our mission is to enhance business operations and continuously deliver new capabilities across the organisation. Our engineers, data scientists, and analysts play a crucial role in democratising data through intuitive visualisation tools and governance-focused web portals. Leveraging Azure's platform-as-a-service offerings, we have developed a cloud-based solution to manage and visualise ABN AMRO's metadata (Managed Consuming). These applications are powered by Azure Synapse Analytics, Azure Databricks, Azure Data Factory, Unity Catalog, Azure Purview, and PySpark to perform large-scale ETL operations.

The Managed Consuming team produces Integrated Data Sets (IDS) on demand, derived from golden data elements in DIAL, enabling trusted and consistent insights. Through these efforts, we have built a highly interactive, efficient, and scalable cloud-based solution to support the bank's growing data needs.

Apply for this position