Senior Data Engineer

TUV AUSTRIA BELGIUM NV
Rotselaar, Belgium
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
Dutch, English
Experience level
Senior

Job location

Rotselaar, Belgium

Tech stack

Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Data analysis
Azure
Big Data
Google BigQuery
Cloud Computing
Information Systems
Data Architecture
Information Engineering
Data Governance
Data Integration
ETL
Data Transformation
Data Systems
Data Warehousing
Distributed Systems
Data Flow Control
GitHub
Python
Machine Learning
Performance Tuning
Power BI
DataOps
SQL Databases
Data Streaming
Tableau
Unstructured Data
Snowflake
Spark
Microsoft Fabric
Containerization
Data Lake
Kubernetes
Information Technology
Kafka
Cosmos DB
Machine Learning Operations
Terraform
Data Pipelines
Docker
Databricks

Job description

  • Data Architecture & Pipelines: Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and deliver structured and unstructured data.
  • Data Governance & Quality: Implement governance frameworks, data catalogs, and quality assurance processes to ensure compliance, integrity, and security of all data assets.
  • Data Integration & Storage: Manage data lakes, warehouses, and streaming platforms; optimize storage solutions for performance and cost-efficiency.
  • Enable AI & Analytics: Prepare and curate datasets for machine learning and advanced analytics projects, ensuring accessibility and reliability for data scientists and AI engineers.
  • Performance Optimization: Monitor and optimize data workflows for speed, scalability, and resilience in cloud and on-prem environments.
  • Collaboration: Work closely with cross-functional teams (data scientists, software engineers, project managers) to deliver end-to-end data solutions.
  • Continuous Improvement: Stay updated on emerging technologies in data engineering, big data, and cloud platforms; drive adoption of best practices.
  • Client Interaction: Support clients in understanding data requirements, present technical solutions, and provide expertise on data-driven strategies.

Requirements

  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related field.
  • Experience: 3-5 years in data engineering, big data, or data platform development.
  • Technical Skills:
      • Strong proficiency in Python and SQL; experience with distributed systems (Spark, Databricks).
      • Cloud Platforms:
          • Microsoft Azure: Azure Data Factory, Azure Data Lake Storage, Microsoft Fabric, Azure Cosmos DB.
          • Other Clouds: AWS (Glue, RDS, Athena, Redshift, S3), GCP (BigQuery, Dataflow).
      • Data Orchestration & Workflow: Airflow, Prefect, Dagster, ADF.
      • Data Transformation: dbt.
      • Data Warehousing & Modeling: Snowflake, DuckDB, BigQuery.
      • Streaming & Real-Time Processing: Kafka, Azure Event Hub.
      • Visualization & BI: Power BI, Tableau.
      • Familiarity with CI/CD pipelines (Azure DevOps, GitHub Actions) and containerization (Docker, Kubernetes).
      • Infrastructure as Code: Terraform.

  • Analytical Thinking: Ability to design efficient data workflows and troubleshoot complex data issues.
  • Communication: Clear and concise communication skills for technical and non-technical stakeholders.
  • Ethical Standards: Commitment to data privacy, security, and compliance with relevant regulations.
  • Bonus: Experience with MLOps and DataOps.
  • Fluency in both written and spoken Dutch and English is required.

Benefits & conditions

  • Opportunity to work on cutting-edge data and AI projects with diverse clients.
  • Collaborative and supportive work environment.
  • Continuous learning and professional development opportunities.
  • Competitive salary and benefits package.
  • Flexible working hours and work-life balance initiatives.
