Databricks Data Engineer

Harnham
Charing Cross, United Kingdom
10 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£156K

Job location

Charing Cross, United Kingdom

Tech stack

Clean Code Principles
Airflow
Data analysis
Big Data
Data Architecture
Information Engineering
Data Governance
Python
Machine Learning
Software Engineering
SQL Databases
Data Streaming
Backend
Apache Flink
Real Time Data
Kafka
Operational Systems
Data Management
Confluent
Databricks

Job description

As a Senior Data Engineer, you'll play a key role in developing and optimising the backbone of the company's data platform. You'll be responsible for building and maintaining large-scale, real-time data pipelines that power analytics, machine learning, and operational systems across the business.

You'll collaborate with software engineers, data scientists, and analytics teams to ensure the platform delivers reliable, high-quality, and compliant data at scale. This is a hands-on engineering role that blends software craftsmanship with data architecture expertise.

  • Design and implement high-throughput data streaming solutions using Kafka, Flink, or Confluent.
  • Build and maintain scalable backend systems in Python or Scala, following clean code and testing principles.
  • Develop tools and frameworks for data governance, privacy, and quality monitoring, ensuring full compliance with data protection standards.
  • Create resilient data workflows and automation within Airflow, Databricks, and other modern big data ecosystems.
  • Implement and manage data observability and cataloguing tools (e.g., Monte Carlo, Atlan, DataHub) to enhance visibility and reliability.
  • Partner with ML engineers, analysts, and analytics engineers to understand their data needs and enable advanced data use cases.
  • Contribute to an engineering culture that values testing, peer reviews, and automation-first principles.

Requirements

  • Strong experience in streaming technologies such as Kafka, Flink, or Confluent.
  • Advanced proficiency in Python or Scala, with a solid grasp of software engineering fundamentals.
  • Proven ability to design, deploy, and scale production-grade data platforms and backend systems.
  • Familiarity with data governance frameworks, privacy compliance, and automated data quality checks.
  • Hands-on experience with big data tools (Airflow, Databricks) and data observability platforms.
  • Collaborative mindset and experience working with cross-functional teams including ML and analytics specialists.
  • Curiosity and enthusiasm for continuous learning - you stay up to date with the latest tools and trends in data engineering and love sharing knowledge with others.

About the company

We're partnering with a leading online retail company that's transforming the way data and real-time intelligence shape customer experiences. Their mission is to harness cutting-edge data and streaming technologies to drive smarter decisions, improve efficiency, and create personalised journeys for millions of shoppers worldwide.
