Databricks Data Engineer (x3)

Anson McCade
Nottingham, United Kingdom
2 days ago

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£156K

Job location

Nottingham, United Kingdom

Tech stack

Azure
Continuous Integration
Information Engineering
Hive
Python
Azure DevOps Pipelines
Software Engineering
SQL Databases
Data Streaming
YAML
Spark
Git
PySpark
Software Coding
Terraform
Data Pipelines
Serverless Computing
Databricks

Job description

We're looking for a Senior Data Engineer to support delivery of a modern Azure-based data platform, with Azure Databricks as the core engineering and processing layer. You'll be responsible for building scalable, production-grade data pipelines and ensuring strong engineering discipline across the platform.

Responsibilities

  • Design, build, and maintain data pipelines using Azure Databricks (Apache Spark)
  • Develop production-quality PySpark code following software engineering best practices
  • Use SQL (Spark SQL/Databricks SQL) for transformations and analytics-ready data preparation
  • Engineer reliable batch and near-real-time ingestion workflows
  • Implement orchestration patterns using Azure Data Factory
  • Optimise Spark workloads for performance and cost efficiency
  • Contribute to Azure Synapse components where applicable
  • Work within secure-by-design cloud guardrails and enterprise standards
  • Apply CI/CD practices using Azure DevOps pipelines
  • Produce high-quality documentation (data flows, runbooks, onboarding guides)
  • Mentor junior engineers and embed coding standards and reusable patterns

Requirements

  • Strong Azure Databricks experience (development and operational support)
  • Advanced Python with strong PySpark capability
  • Strong SQL skills
  • Azure Data Factory (pipelines, triggers, monitoring, reruns)
  • CI/CD and Git-based workflows
  • Solid data engineering fundamentals (data modelling, data quality, operational support)
  • Experience tuning and optimising Spark workloads

Desirable

  • Enterprise-scale Databricks delivery experience
  • Unity Catalog or Databricks governance exposure
  • Azure Functions or Logic Apps
  • Azure DevOps YAML pipeline authoring
  • Terraform awareness
  • Experience operating in regulated environments
