Senior Data Engineer
Job description
We're now looking for a Senior Data Engineer to join our Cox Automotive Product & Technology Group (CAPTG) and help evolve our enterprise-wide data platform - the single source of truth that supports analytics, reporting, data science and decision-making across Cox Automotive Europe.
If you enjoy building modern, cloud-native data platforms at scale and want your work to have real business impact, this is a role where you'll thrive.
What you'll be working on
You'll be part of a highly skilled Data Engineering team responsible for designing, building and running a strategic data platform that underpins everything we do with data.
You'll work on things like:
Designing and building robust, scalable data pipelines that ingest and transform data from multiple systems
Delivering high-quality, trusted data that powers analytics, reporting and data science
Building and operating pipelines using Azure Databricks, PySpark and SQL
Working with streaming and event-driven data (Auto Loader, Structured Streaming, Event Hubs)
Helping shape the architecture and roadmap of our enterprise data platform
Improving data quality, monitoring and reliability through automation and best practices
Collaborating with software engineers, data scientists and product teams to deliver real business value
This is not a maintenance role - you'll be helping to build and evolve a platform designed to support Cox Automotive for years to come.
The tech you'll use
We're cloud-native and favour managed services, so our engineers can focus on solving problems rather than running infrastructure.
You'll work with:
Azure Databricks (ELT pipelines, Auto Loader, Structured Streaming, Unity Catalog)
Python, PySpark and SQL
Azure (Event Hubs, Storage Accounts, Key Vault, networking & security)
CI/CD using Azure DevOps and/or GitHub
Modern data architectures (lakehouse, distributed systems, cloud-native design)
If you also have experience with AWS, Terraform, MLflow or advanced data modelling, that's a big plus.
What we're looking for
We're looking for someone who enjoys owning data solutions end-to-end and wants to work on a platform that really matters to the business.
You'll bring:
Experience building and running large-scale data pipelines
Strong SQL skills and experience working with relational data
Proficiency in Python and Spark (PySpark) for data processing
Experience using Databricks in production
A solid understanding of cloud data platforms and distributed systems
A mindset focused on quality, reliability and performance
Just as important, you'll be someone who:
Communicates well with both technical and non-technical stakeholders
Enjoys collaborating across teams
Takes ownership, spots problems early and drives them to resolution
Likes mentoring and raising the bar for those around them