Data Engineer

Head Resourcing Ltd
Glasgow, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate
Compensation
£55K

Job location

Glasgow, United Kingdom

Tech stack

API
Azure
Code Review
Continuous Integration
Information Engineering
Hive
Metadata
Performance Tuning
SharePoint
SQL Databases
File Transfer Protocol (FTP)
Git
Data Lake
PySpark
Bicep
GraphQL
REST
Terraform
Legacy Systems
Databricks

Job description

My client is undergoing a major transformation of their entire data landscape, migrating from legacy systems and manual reporting to a modern Azure + Databricks Lakehouse. They are building a secure, automated, enterprise-grade platform powered by Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory. They are looking for a Mid-Level Data Engineer to help deliver high-quality pipelines and curated datasets used across Finance, Operations, Sales, Customer Care and Logistics.

Responsibilities

* Build and maintain scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL (a minimal sketch follows this list).
* Work within a Medallion architecture (Bronze → Silver → Gold) to deliver reliable, high-quality datasets.
* Ingest data from multiple sources including ChargeBee, legacy operational files, SharePoint, SFTP, SQL, REST and GraphQL APIs using Azure Data Factory and metadata-driven patterns.
* Use Lakeflow Workflows and ADF to orchestrate and optimise ingestion, transformation and scheduled jobs.
* Help implement monitoring, alerting, SLAs/SLIs and runbooks to support production reliability.
* Assist in performance tuning and cost optimisation.
* Contribute to CI/CD pipelines in Azure DevOps to automate deployment of notebooks, Lakeflow Declarative Pipelines, SQL models and ADF assets.
* Support secure deployment patterns using private endpoints, managed identities and Key Vault.
* Participate in code reviews and help improve engineering practices.
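
To give candidates a concrete flavour of the day-to-day work, here is a minimal sketch of a Bronze-to-Silver PySpark transform in a Medallion layout. All table, column and source names (bronze.chargebee_invoices, silver.invoices, invoice_id and so on) are hypothetical illustrations, not the client's actual schema or pipeline definitions.

# Minimal Bronze -> Silver sketch; table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw ingested records from a hypothetical Bronze Delta table.
bronze = spark.read.table("bronze.chargebee_invoices")

# Standardise types, deduplicate and stamp audit metadata.
silver = (
    bronze.dropDuplicates(["invoice_id"])
    .withColumn("amount_gbp", F.col("amount").cast("decimal(18,2)"))
    .withColumn("invoice_date", F.to_date("invoice_date"))
    .withColumn("_processed_at", F.current_timestamp())
)

# Publish the curated result to the Silver layer as a Delta table.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.invoices")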

Requirements

* Commercial and proven data engineering experience.
* Hands-on experience delivering solutions on Azure + Databricks.
* Strong PySpark and Spark SQL skills within distributed compute environments.
* Experience working in a Lakehouse/Medallion architecture with Delta Lake.
* Understanding of dimensional modelling (Kimball), including SCD Type 1/2 (a minimal sketch follows this list).
* Exposure to operational concepts such as monitoring, retries, idempotency and backfills.
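
As an illustration of the SCD Type 2 expectation, here is a minimal two-step Delta Lake merge sketch. The tables (gold.dim_customer, silver.customers_latest), the key (customer_id) and the tracked attributes (email, postcode) are hypothetical assumptions, not the client's dimensional model.

# Minimal SCD Type 2 sketch; all names are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

updates = spark.read.table("silver.customers_latest")
dim = DeltaTable.forName(spark, "gold.dim_customer")

# Step 1: close off current rows whose tracked attributes have changed.
(dim.alias("d")
    .merge(updates.alias("u"),
           "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.email <> u.email OR d.postcode <> u.postcode",
        set={"is_current": "false", "valid_to": "current_timestamp()"})
    .execute())

# Step 2: insert a new current version for changed or brand-new customers
# (assumes updates carries the remaining dimension columns).
current = spark.read.table("gold.dim_customer").filter("is_current = true")
new_rows = (
    updates.join(current, "customer_id", "left_anti")
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
    .withColumn("is_current", F.lit(True))
)
new_rows.write.format("delta").mode("append").saveAsTable("gold.dim_customer")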

Mindset

* Keen to grow within a modern Azure Data Platform environment.
* Comfortable with Git, CI/CD and modern engineering workflows.
* Able to communicate technical concepts clearly to non-technical stakeholders.
* Quality-driven, collaborative and proactive.

Desirable

* Databricks Certified Data Engineer Associate.
* Experience with streaming ingestion (Auto Loader, event streams, watermarking) (a minimal sketch follows this list).
* Subscription/entitlement modelling (e.g. ChargeBee).
* Unity Catalog advanced security (RLS, PII governance).
* Terraform or Bicep for IaC.
* Fabric Semantic Models or Direct Lake optimisation experience.
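
For candidates unfamiliar with Auto Loader, here is a minimal incremental-ingestion sketch; the landing path, schema location, checkpoint location and target table are all hypothetical placeholders.

# Minimal Databricks Auto Loader sketch; paths and table are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incrementally discover and read new files from a landing zone.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/orders")
    .load("/mnt/lake/landing/orders/")
)

# Append into a Bronze Delta table; the checkpoint lets the stream
# restart without reprocessing files it has already seen.
(
    stream.writeStream
    .option("checkpointLocation", "/mnt/lake/_checkpoints/orders")
    .trigger(availableNow=True)  # run as an incremental batch, then stop
    .toTable("bronze.orders")
)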

Apply for this position