Senior Data Engineer / Power BI
Head Resourcing Ltd
Glasgow, United Kingdom
3 days ago
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior
Compensation: £80K
Job location: Glasgow, United Kingdom
Tech stack
Microsoft Excel
API
Data analysis
Azure
Software as a Service
Cloud Computing
Continuous Integration
Information Engineering
Software Design Patterns
DevOps
Hive
Python
Power BI
SharePoint
SQL Databases
Data Streaming
File Transfer Protocol (FTP)
Spark
Git
Data Lake
PySpark
Bicep
Terraform
Databricks
Job description
- Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL (see the first sketch after this list)
- Implement ingestion patterns for files, APIs, SaaS platforms, SQL sources, SharePoint, and SFTP
- Apply expectations for data quality, schema validation, and operational reliability
- Build clean, conformed Silver/Gold models aligned to business domains
- Deliver star schemas, harmonisation logic, and business marts for Power BI datasets
- Apply governance and lineage via Unity Catalog (see the second sketch after this list)
- Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory
- Implement monitoring, alerting, SLAs, and cost-optimisation
- Build CI/CD pipelines in Azure DevOps for notebooks and pipelines
- Ensure secure platform operation using private endpoints and managed identities
- Contribute to platform standards and design patterns
- Collaborate with BI/Analytics teams to power dashboards
- Influence architecture decisions and uplift engineering maturity
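To make the first and third bullets concrete, here is a minimal sketch of a Bronze-to-Silver hop with data-quality expectations, using the Lakeflow Declarative Pipelines (formerly Delta Live Tables) Python API. The table names, columns, and landing path are hypothetical, not from the posting.

```python
# Hedged sketch: declarative Bronze -> Silver pipeline with expectations.
# raw_orders, orders_silver, order_id, amount and the landing path are
# all illustrative placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw order files ingested as-is via Auto Loader")
def raw_orders():
    return (
        spark.readStream.format("cloudFiles")  # `spark` is provided by the pipeline runtime
        .option("cloudFiles.format", "json")
        .load("/Volumes/landing/orders/")      # hypothetical landing volume
    )

@dlt.table(comment="Silver: validated, conformed orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # quarantine bad rows
@dlt.expect("non_negative_amount", "amount >= 0")              # warn but keep the row
def orders_silver():
    return (
        dlt.read_stream("raw_orders")
        .withColumn("ingested_at", F.current_timestamp())
    )
```

Expectations like these give the pipeline the schema-validation and operational-reliability guarantees the bullet describes: violating rows are dropped or merely logged, depending on the decorator chosen.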
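And a second sketch for the Unity Catalog governance bullet: grants issued from a Python setup notebook via spark.sql. The catalog, schema, and group names (lakehouse, gold, bi_readers) are assumptions for illustration only.

```python
# Hedged sketch: Unity Catalog grants so BI readers can query curated
# Gold tables and nothing else. All names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("GRANT USE CATALOG ON CATALOG lakehouse TO `bi_readers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA lakehouse.gold TO `bi_readers`")
spark.sql("GRANT SELECT ON SCHEMA lakehouse.gold TO `bi_readers`")
```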
Technologies:
- Azure
- CI/CD
- Databricks
- DevOps
- Git
- Power BI
- PySpark
- SQL
- Security
- SharePoint
- Spark
- Terraform
- Unity Catalog
- Cloud
- Excel
- Fabric
- Support
- Python

We are a rapidly scaling UK consumer brand based in Glasgow, seeking a Lead Data Engineer to join our team. You will be at the forefront of our data modernisation programme, moving towards a fully automated Azure Enterprise Landing Zone and Databricks Lakehouse. This is a unique opportunity to influence architecture and help define best practices. We offer a dynamic work environment with opportunities for professional growth and collaboration across various business functions.
Requirements
- 5-8+ years of Data Engineering experience
- 2-3+ years delivering production workloads on Azure and Databricks
- Strong expertise in PySpark and Spark SQL
- Proven experience with Medallion/Lakehouse delivery using Delta Lake
- Solid understanding of dimensional modelling (Kimball) including surrogate keys and SCD types (a sketch follows this list)
- Operational experience with SLAs, observability, and idempotent pipelines
- Familiarity with secure Azure Landing Zone patterns
- Comfort with Git, CI/CD, and automated deployments
- Clear communication skills to translate technical decisions into business outcomes
- Databricks Certified Data Engineer Associate (nice to have)
- Streaming ingestion experience (nice to have)
- Subscription/entitlement modelling experience (nice to have)
- Advanced Unity Catalog security knowledge (nice to have)
- Experience with Terraform/Bicep for IaC (nice to have)
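For the dimensional-modelling requirement above, here is a hedged sketch of one common SCD Type 2 pattern on Delta Lake. It assumes the change feed contains only genuinely changed records and that the dimension's surrogate key is a Delta identity column; every table and column name is hypothetical.

```python
# Hedged sketch: SCD Type 2 upsert into a customer dimension via Delta MERGE.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

dim = DeltaTable.forName(spark, "gold.dim_customer")  # hypothetical dimension
changes = spark.table("silver.customer_changes")      # hypothetical change feed

# Step 1: expire the current row for each changed business key.
(dim.alias("d")
    .merge(changes.alias("c"),
           "d.customer_id = c.customer_id AND d.is_current = true")
    .whenMatchedUpdate(set={
        "is_current": "false",
        "valid_to": "current_timestamp()",
    })
    .execute())

# Step 2: append the new versions as open (current) rows. The surrogate
# key is assumed to be a GENERATED ALWAYS AS IDENTITY column, so it is
# not supplied here.
(changes
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
    .withColumn("is_current", F.lit(True))
    .write.format("delta").mode("append").saveAsTable("gold.dim_customer"))
```

The two-step expire-then-append approach keeps each step idempotent to reason about, which ties in with the operational requirement above; a single-MERGE variant is equally common.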