Data Engineer

Uni Systems
Brussels, Belgium
3 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English

Job location

Brussels, Belgium

Tech stack

Artificial Intelligence
Amazon Web Services (AWS)
Data analysis
Confluence
JIRA
Azure
Cloud Computing
ETL
Data Systems
Database Design
DevOps
GitHub
Python
KNIME
Machine Learning
Natural Language Processing
Power BI
SharePoint
SQL Databases
Data Processing
Microsoft Power Automate
Large Language Models
Spark
GitLab
Git
Build Management
Microsoft Fabric
low-code
Wikis
Physical Data Models
Dataiku
Software Version Control
Data Pipelines

Job description

  • Collect and analyze business requirements; define robust data models and scalable architectures.
  • Design and build scalable, reliable data pipelines and workflows in cloud environments (see the sketch after this list).
  • Apply DevOps practices, including Git-based version control and participation in CI/CD pipelines.
  • Ensure data quality, security, and governance standards are maintained across all data-related activities.
  • Collaborate with cross-functional teams to align data solutions with business needs and quality expectations.
  • Specify and design presentation interfaces with optimal usability and user experience.
  • Document processes and tasks to ensure transparency, explainability, and team-wide understanding.
  • Support the integration of AI-based enrichment and transformation processes into existing data pipelines and workflows.
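
For illustration, below is a minimal PySpark sketch of the kind of pipeline-and-quality workflow these responsibilities describe. The storage paths, column names, and quality rules are hypothetical, not taken from this posting.

```python
# Minimal PySpark sketch of a cloud-style data pipeline with a basic
# data-quality gate. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Extract: read raw data landed in cloud storage (path is illustrative).
raw = spark.read.csv("s3://landing/orders/*.csv", header=True, inferSchema=True)

# Transform: normalise types and derive a business column.
orders = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .withColumn("net_amount", F.col("amount") - F.col("discount"))
)

# Data quality: keep rows passing basic rules; quarantine the rest for review.
valid = orders.filter(F.col("order_id").isNotNull() & (F.col("net_amount") >= 0))
rejected = orders.subtract(valid)

# Load: write curated data and quarantined rows to separate locations.
valid.write.mode("overwrite").parquet("s3://curated/orders/")
rejected.write.mode("overwrite").parquet("s3://quarantine/orders/")
```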

Requirements

  • Master's degree in IT with a minimum of 11 years of relevant experience (or Bachelor's degree and a minimum of 15 years of relevant experience).

  • Excellent knowledge of Python, Spark and SQL.
  • Excellent knowledge of designing and building ETL pipelines using tools such as Azure Synapse, Microsoft Fabric and/or AWS Glue.
  • Excellent knowledge of data modelling and database design principles using the Medallion Architecture (see the sketch after this list).
  • Good knowledge of business intelligence tools, notably Microsoft Power BI.
  • Knowledge of Machine Learning, Natural Language Processing and Large Language Models (LLMs) fundamentals.
  • Ability to integrate AI and ML techniques into data workflows for enrichment, categorisation and transformation (see the example at the end of this section).
  • Strong, hands-on coding ability for data processing, analytics, and automation.
  • Ability to collect, analyse and translate business needs into technical specifications.
  • Skills in designing conceptual, logical and physical data models.
  • Ability to extract, transform, load, clean and merge datasets from multiple sources.
  • Experience with automated workflows and orchestration tools.
  • Ability to handle large and complex datasets efficiently.
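
To make the Medallion Architecture requirement concrete, here is a hedged PySpark sketch of the bronze/silver/gold layering. In practice these layers are usually Delta tables in Azure Synapse, Microsoft Fabric or Databricks; plain Parquet is used here only to keep the example self-contained, and all names are illustrative.

```python
# Hedged sketch of Medallion-style layering: bronze holds raw ingested data,
# silver holds cleaned and conformed data, gold holds business-level
# aggregates. Paths and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land the source data as-is, adding ingestion metadata only.
bronze = (
    spark.read.json("s3://landing/customers/")
         .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.mode("append").parquet("s3://lake/bronze/customers/")

# Silver: clean, deduplicate and conform to the modelled schema.
silver = (
    spark.read.parquet("s3://lake/bronze/customers/")
         .dropDuplicates(["customer_id"])
         .filter(F.col("customer_id").isNotNull())
         .withColumn("country", F.upper("country"))
)
silver.write.mode("overwrite").parquet("s3://lake/silver/customers/")

# Gold: aggregate into a consumption-ready table for BI (e.g., Power BI).
gold = silver.groupBy("country").agg(F.count("*").alias("customer_count"))
gold.write.mode("overwrite").parquet("s3://lake/gold/customers_by_country/")
```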

Desirable

  • Understanding of Microsoft Power Platform (e.g., Power Automate, SharePoint Lists).
  • Good knowledge of Microsoft Fabric components, including Lakehouses, Pipelines, Dataflows Gen2, Notebooks, and Semantic Models.
  • Good knowledge of cloud environments (AWS or Microsoft Azure).
  • Understanding of DevOps practices, including Git workflows and CI/CD pipelines, with experience using tools such as Azure DevOps, GitHub, and GitLab.
  • Knowledge of no-code/low-code data science platforms such as KNIME and/or Dataiku.
  • Experience documenting and organizing processes using task management tools (e.g., Jira, OpenProject) and documentation platforms (e.g., Confluence, GitLab Wiki, GitHub Wiki).
  • Proficiency in English at level B2 or higher.
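
As an illustration of the AI-based enrichment and categorisation mentioned above, the sketch below applies a categorisation function to each record via a Spark UDF. The keyword rules are only a stand-in for a real ML model or LLM call, and all names and data are hypothetical.

```python
# Hedged sketch of AI-based enrichment in a pipeline: a Spark UDF tags each
# support ticket with a category. The keyword rules below stand in for a
# real ML model or LLM call; all names and data are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("enrichment-demo").getOrCreate()

def categorise(text: str) -> str:
    """Placeholder classifier; swap in an ML model or LLM endpoint here."""
    text = (text or "").lower()
    if "invoice" in text or "payment" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "technical"
    return "general"

categorise_udf = F.udf(categorise, StringType())

tickets = spark.createDataFrame(
    [(1, "Payment failed on invoice"), (2, "App crashes on startup")],
    ["ticket_id", "body"],
)

# Enrich the dataset with a derived category column, then write it onward.
enriched = tickets.withColumn("category", categorise_udf(F.col("body")))
enriched.show()
```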
