Data Engineer

Abbott
Weesp, Netherlands
5 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
Dutch, English, Spanish, German
Experience level
Intermediate

Job location

Weesp, Netherlands

Tech stack

Amazon Web Services (AWS)
Code Review
Computer Programming
Databases
Data Cleansing
Information Engineering
ETL
Data Systems
Python
Performance Tuning
Power BI
Software Engineering
Unstructured Data
Spark
PySpark
Information Technology
Kafka
Data Pipelines
Databricks

Job description

We are looking for a Data Engineer passionate about building robust and scalable data solutions. In this role, you will design and implement data architectures and pipelines that enable efficient processing, integration, and visualization to support business objectives. You will collaborate closely with technology and engineering teams to deliver high-quality, innovative solutions.

  • Design and implement data pipelines for various projects and initiatives.
  • Develop and maintain optimal pipeline architecture using AWS native services.
  • Design and optimize data models on AWS Cloud with Databricks, Redshift, RDS, and S3.
  • Integrate and assemble large, complex datasets to meet diverse business requirements.
  • Perform ETL processes: read, transform, stage, and load data into selected tools and frameworks (an illustrative sketch follows this list).
  • Customize and manage integration tools, databases, warehouses, and analytical systems.
  • Enhance internal Python libraries for pipeline development.
  • Monitor and optimize data performance, uptime, and scalability.
  • Create architecture and design documentation and contribute to best practices.
  • Participate in technical planning, design, and peer code reviews.
  • Stay current with emerging trends and recommend innovative solutions.
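
To give a flavour of the kind of pipeline work described above, here is a minimal PySpark ETL sketch: it reads raw CSV files from an S3 landing zone, applies basic cleansing, and stages the result as partitioned Parquet. The job name, bucket paths, and column names are hypothetical placeholders for illustration, not Abbott's actual pipelines.

  # Minimal PySpark ETL sketch (illustrative only; paths and columns are hypothetical).
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("example-etl").getOrCreate()

  # Extract: read raw CSV files from an S3 landing zone.
  raw = spark.read.option("header", True).csv("s3://example-landing-zone/sales/*.csv")

  # Transform: basic cleansing -- deduplicate, normalise types, drop bad rows.
  clean = (
      raw.dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount").isNotNull())
  )

  # Load: stage curated data as partitioned Parquet for downstream consumers
  # (e.g. Databricks tables, Redshift, or Power BI models).
  clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-curated-zone/sales/")

In practice, a job like this would typically be scheduled on Databricks or AWS-native services and feed the downstream warehouse and reporting layers mentioned in this role.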

Requirements

Must-have:

  • Bachelor's degree in Computer Science, IT, or a related field.
  • Minimum 3 years of experience in Data Engineering or Software Engineering.
  • Hands-on experience with AWS, Databricks, and/or Spark.
  • Strong programming skills in Python, ideally with experience in PySpark or Kafka.
  • Knowledge of strategies for processing large volumes of structured and unstructured data, including multi-source integration.
  • Experience with data cleaning, wrangling, visualization, and reporting.
  • Excellent communication skills and ability to work effectively in a distributed team.
  • Fluent in English, written and spoken.

Nice-to-have:

  • Experience developing Power BI reports.
  • Familiarity with BI applications, data quality, and performance tuning.
  • Ability to communicate in Dutch, German, or Spanish.

We are looking for a proactive and independent Data Engineer who thrives in a global environment. You are not only technically skilled but also an excellent communicator who can build strong relationships across teams and regions. Collaboration is key in this role, so you should be comfortable working with diverse stakeholders and aligning technical solutions with business objectives. Your ability to work autonomously, adapt to change, and maintain clarity in complex situations will set you apart.

Apply for this position