Data Engineer

Yacht
Amsterdam, Netherlands
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Amsterdam, Netherlands

Tech stack

Airflow
Azure
Software Quality
ETL
Data Structures
Relational Databases
Distributed Computing Environment
Python
Standard SQL
Software Engineering
Spark
PySpark

Job description

Using Apache Spark and Azure Databricks, you'll ensure seamless data delivery to critical reporting and risk systems within a modern Python environment.

  • Lead the onboarding of new financial data sources from schema definition to production deployment.

  • Develop and maintain modular ETL processes using PySpark and clean architecture principles.

  • Ensure data integrity through rigorous unit and integration testing within a CI/CD pipeline.

  • Collaborate on future-state architecture (DMF) activities to stay ahead of financial data trends.

Requirements

  • You possess at least 5 years of experience in software development with a deep focus on Python.

  • You have extensive experience with Apache Spark or PySpark for distributed data processing.

  • You have demonstrable knowledge of ETL methodologies and working with SQL and relational databases.

  • You are accustomed to working within an Azure DevOps environment with full CI/CD integration.

  • Strategic Impact: You understand the complexity of financial data and translate this into robust solutions.

  • Quality Driven: You maintain high standards for code quality and strive for full transparency in your work.

  • Stakeholder Management: You act as an equal sparring partner for both the team and the business.

  • Analytical Excellence: You easily navigate complex data structures and architectures.
