Senior Data Engineer (ID:3414)
Amstelveen, Netherlands
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior
Job location: Amstelveen, Netherlands
Tech stack
Amazon Web Services (AWS)
Azure
Information Engineering
ETL
Data Transformation
Data Security
Data Systems
Data Vault Modeling
Python
Metadata
Operational Databases
Performance Tuning
SQL Databases
Technical Data Management Systems
Data Processing
Snowflake
Git
PySpark
Semi-structured Data
Data Pipelines
Docker
Job description
- Design, develop, and maintain scalable and reliable data pipelines to support analytics, reporting, and business decision-making.
- Extract, analyze, and manage existing Snowflake roles, grants, privileges, and related metadata.
- Design and implement a configuration-driven management framework using version-controlled files stored in Git (a sketch of this pattern follows this list).
- Ensure data security, integrity, and compliance during Snowflake role migration, with an initial focus on data roles automation.
- Build, optimize, and support ETL/ELT pipelines for structured and semi-structured data from multiple sources.
- Monitor, troubleshoot, and enhance production data pipelines for performance, reliability, and scalability.
- Implement and maintain Data Vault data models in Snowflake to support large-scale analytics and BI use cases.
- Write, optimize, and maintain high-performing SQL queries for data transformation and reporting.
- Collaborate closely with data architects, product managers, analysts, and data scientists to deliver impactful data solutions.
- Engage with business stakeholders to translate requirements into scalable technical solutions.
- Work with cloud platforms (AWS/Azure), dbt, and Snowflake to deliver cloud-native data solutions.
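To ground the grants extraction and Git-based framework described above, here is a minimal sketch, assuming the snowflake-connector-python package and read access to Snowflake's built-in SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES view; the connection parameters and the grants_snapshot.json output path are illustrative only.

```python
import json

import snowflake.connector  # assumes the snowflake-connector-python package

# Placeholder credentials; in practice these come from a secrets manager or
# environment variables, never from literals in version-controlled code.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",
    role="SECURITYADMIN",
)

# GRANTS_TO_ROLES is Snowflake's audit view of privileges granted to roles
# (note that ACCOUNT_USAGE views lag live state by up to a few hours).
cur = conn.cursor()
cur.execute(
    """
    SELECT grantee_name, privilege, granted_on, name
    FROM snowflake.account_usage.grants_to_roles
    WHERE deleted_on IS NULL
    ORDER BY grantee_name, granted_on, name, privilege
    """
)
columns = [col[0].lower() for col in cur.description]
grants = [dict(zip(columns, row)) for row in cur.fetchall()]

# Serialize deterministically so successive snapshots diff cleanly in Git.
with open("grants_snapshot.json", "w") as f:
    json.dump(grants, f, indent=2, sort_keys=True, default=str)
```

Deterministic ordering and sorted keys are what make a snapshot like this reviewable in pull requests, which is the core of a configuration-driven access model.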
Requirements
- 6-8 years of hands-on experience in data engineering roles.
- Advanced proficiency in Python for automation and data engineering workflows.
- Strong command of SQL for complex transformations and performance optimization.
- Experience with dbt for data transformation and modeling.
- Solid understanding of ETL/ELT architectures and pipeline orchestration.
- Hands-on experience with cloud platforms (AWS and/or Azure).
- Experience implementing Data Vault modeling in Snowflake (a minimal hub-load sketch follows this list).
- Familiarity with Docker and modern data engineering tooling.
- Exposure to PySpark and large-scale data processing frameworks.
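As context for the Data Vault requirement above, here is a minimal hub-load sketch in Snowflake SQL, issued through the same Python connector for consistency; all object and column names (DV.HUB_CUSTOMER, RAW.STG_CUSTOMERS, CUSTOMER_BK) are hypothetical, and a production version would more likely live in dbt models than in a script.

```python
import snowflake.connector  # assumes the snowflake-connector-python package

# Hub table per the Data Vault 2.0 convention: hash key, business key,
# load timestamp, and record source. All names are illustrative.
HUB_DDL = """
CREATE TABLE IF NOT EXISTS dv.hub_customer (
    hub_customer_hk  BINARY(16)    NOT NULL,  -- MD5 hash of the business key
    customer_bk      VARCHAR       NOT NULL,  -- business key from the source
    load_dts         TIMESTAMP_NTZ NOT NULL,  -- load timestamp
    record_src       VARCHAR       NOT NULL,  -- originating system
    CONSTRAINT pk_hub_customer PRIMARY KEY (hub_customer_hk)
)
"""

# Insert-only load: only business keys not yet present in the hub are added,
# so the statement is idempotent and safe to rerun.
HUB_LOAD = """
INSERT INTO dv.hub_customer (hub_customer_hk, customer_bk, load_dts, record_src)
SELECT DISTINCT
    MD5_BINARY(UPPER(TRIM(s.customer_bk))),
    s.customer_bk,
    CURRENT_TIMESTAMP(),
    'CRM'
FROM raw.stg_customers AS s
LEFT JOIN dv.hub_customer AS h
  ON h.hub_customer_hk = MD5_BINARY(UPPER(TRIM(s.customer_bk)))
WHERE h.hub_customer_hk IS NULL
"""

conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
cur = conn.cursor()
cur.execute(HUB_DDL)
cur.execute(HUB_LOAD)
```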
You should possess the ability to
- Design scalable, secure, and maintainable data architectures.
- Automate infrastructure and data workflows using Python and SQL (see the sketch after this list).
- Translate business and analytical requirements into technical data solutions.
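To illustrate the Python-plus-SQL automation named above, a sketch of the reverse direction: applying a version-controlled grants configuration back to Snowflake. The grants_config.json file, its JSON shape, and the connection parameters are assumptions for illustration, not the posting's actual framework.

```python
import json

import snowflake.connector  # assumes the snowflake-connector-python package

# Hypothetical version-controlled config: a list of desired grants, e.g.
# [{"role": "ANALYST", "privilege": "USAGE", "on": "DATABASE reporting"}]
with open("grants_config.json") as f:
    desired_grants = json.load(f)

conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
cur = conn.cursor()

for grant in desired_grants:
    # GRANT is idempotent in Snowflake, so re-applying the full config on
    # every deploy converges the account toward the declared state. The
    # interpolated identifiers are acceptable here only because they come
    # from a reviewed, version-controlled file, not from user input.
    cur.execute(
        f'GRANT {grant["privilege"]} ON {grant["on"]} TO ROLE {grant["role"]}'
    )
```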
Benefits & conditions
- Opportunity to work on cutting-edge data engineering solutions using modern cloud and analytics technologies.
- Exposure to large-scale retail data platforms supporting business-critical analytics.
- A collaborative, innovative environment that values technical excellence and continuous improvement.
- Opportunities for professional growth, skill enhancement, and technical leadership.