Data Engineer
Job description
They're now looking for a Data Engineer to help develop and support the data pipelines that power customer reporting and analytics. You'll be working with large datasets, building ETL processes, improving performance, and generally making sure the data is accurate, reliable and scalable.
It's a collaborative environment where engineers are trusted to solve problems, suggest improvements, and get involved in shaping how the platform evolves.
What you'll be doing
You'll spend your time building and improving data pipelines, working with both batch and real-time processing across large datasets. A typical week might involve:
- Writing and improving ETL jobs
- Working with Python and SQL to process and transform data
- Investigating and fixing tricky data issues
- Improving pipeline performance and reliability
- Collaborating with product and engineering teams to turn requirements into working data solutions
- Reviewing code and sharing knowledge across the team
- Supporting production systems when needed
The platform uses a mix of modern data tooling including AWS and Apache Spark, so there's plenty of opportunity to work with big data technologies.
Requirements
If you enjoy solving messy data problems and turning them into clean, reliable pipelines, this could be worth a look. You don't need to tick every box, but the sort of background that tends to work well here includes:
- Experience working as a Data Engineer or in a similar data-focused role
- Strong Python and SQL skills
- Experience working with cloud platforms (AWS or similar)
- Confidence working with large datasets and complex data problems
- An interest in improving pipelines, tooling and processes
- A collaborative approach and willingness to share ideas
Exposure to tools like Apache Spark, modern data platforms, or AI tools for development productivity would be a bonus but isn't essential.