Data Engineer

Weeve Data
Kingston upon Thames, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Compensation
£60K

Job location

Remote
Kingston upon Thames, United Kingdom

Tech stack

Artificial Intelligence
Azure
Cloud Computing
Continuous Integration
Data as a Service
Data Cleansing
Information Engineering
Data Governance
ETL
Data Systems
DevOps
GitHub
Hive
Python
Azure SQL
Power BI
SQL Databases
Data Streaming
T-SQL
Spark
Git
Microsoft Fabric
PySpark
Information Technology
Kafka
Software Version Control
Data Pipelines
Databricks

Job description

  • Collaborate with stakeholders to manage client engagement throughout project delivery, gathering and translating business requirements into technical, actionable data solutions.
  • Develop and maintain end-to-end ETL/ELT data pipelines using Microsoft Fabric (Data Factory, Notebooks, Dataflows Gen2).
  • Build, optimize, and maintain lakehouse architectures using the Medallion (Bronze/Silver/Gold) pattern.
  • Create high-quality semantic models, star schemas, and data models optimized for Power BI and analytical reporting.
  • Utilize PySpark, Spark SQL, and T-SQL for complex data cleansing, transformation, and processing of large-scale datasets.
  • Build dynamic, interactive reports and dashboards at strategic, analytical, and operational levels with Power BI using advanced DAX expressions.
  • Monitor, troubleshoot, and tune Fabric workloads (pipelines, SQL endpoints) to improve performance and cost-efficiency.
  • Implement data quality, validation, security, and governance best practices, including row-level security and workspace management.
  • Deliver technical hands-on workshop training courses for external clients.
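The pipeline responsibilities above follow the Medallion (Bronze/Silver/Gold) pattern. A minimal, illustrative sketch in plain Python of what each layer does (all names and records are hypothetical and stand in for Fabric/PySpark dataframes; this is not code from the role):

```python
# Illustrative Medallion (Bronze/Silver/Gold) sketch using plain Python
# stand-ins for Fabric/PySpark dataframes. All names and data are hypothetical.

# Bronze: raw ingested records, kept as-is (including bad rows).
bronze = [
    {"order_id": "1", "region": "UK ", "amount": "120.50"},
    {"order_id": "2", "region": "uk", "amount": "80.00"},
    {"order_id": None, "region": "UK", "amount": "15.00"},  # invalid: missing key
]

# Silver: cleansed and validated -- trim/normalise fields, cast types,
# drop invalid rows (the "data cleansing and transformation" step).
silver = [
    {
        "order_id": int(r["order_id"]),
        "region": r["region"].strip().upper(),
        "amount": float(r["amount"]),
    }
    for r in bronze
    if r["order_id"] is not None
]

# Gold: business-level aggregate, shaped for a Power BI semantic model.
gold = {}
for r in silver:
    gold[r["region"]] = gold.get(r["region"], 0.0) + r["amount"]

print(gold)  # one total per normalised region
```

In Fabric the same flow would typically be PySpark transformations between Lakehouse tables, with the Gold layer exposed to Power BI via a semantic model.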

Technologies:

  • Azure
  • CI/CD
  • Databricks
  • DAX
  • DevOps
  • ETL
  • Fabric
  • Git
  • GitHub
  • Support
  • Kafka
  • Power BI
  • Python
  • PySpark
  • SQL
  • Security
  • Spark
  • Cloud
  • AI

Requirements

  • Proven hands-on experience with Microsoft Fabric workloads, including Lakehouse, Data Factory, and Notebooks.
  • Strong expertise in Python/PySpark, SQL, DAX, and Spark SQL.
  • Solid understanding of Azure data services (ADF, Azure Data Lake, Databricks, Synapse, Azure SQL Database, Key Vault).
  • Strong understanding of Kimball dimensional modelling and lakehouse concepts.
  • Experience with Git for version control and CI/CD pipelines (Azure DevOps/GitHub).
  • Bachelor's degree in Computer Science, Data Engineering, or equivalent experience. Microsoft Fabric (DP-700/DP-600) certification is highly preferred.
  • Knowledge of Data Governance tools like Microsoft Purview (nice to have).
  • Familiarity with streaming technologies (Kafka, Event Hubs) (nice to have).
  • Experience in migrating legacy SQL systems to Microsoft Fabric (nice to have).
  • Databricks experience (nice to have).

Benefits & conditions

We are an ambitious company seeking a skilled and passionate Fabric Data Engineer Consultant to design, build, and maintain scalable analytics and data engineering solutions for our external clients. We offer the opportunity to work on greenfield projects, a competitive salary and benefits package, hybrid and flexible working options, and support for continuous learning, training, and certification. Our team is dedicated to delivering high-quality solutions while maintaining a strong focus on professional development. The position is remote, allowing flexibility in work location.
