Data Engineer (Kafka, Hadoop, Spark, Python, DBT) - Manchester
Contracts IT
Manchester, United Kingdom
2 days ago
Role details
Contract type: Temporary contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior
Job location: Manchester, United Kingdom
Tech stack
Agile Methodologies
Amazon Web Services (AWS)
Cloud Computing
Continuous Integration
Information Engineering
Data Security
Data Vault Modeling
Hadoop
Python
Spark
Kafka
Data Pipelines
Job description
Our client is looking for an experienced Data Engineer to design and deliver scalable data pipelines and help build a modern, high-performance data platform. You will work with cross-functional teams to ensure data is reliable, secure, and easily accessible for analytics and product development.
Responsibilities
- Build and maintain scalable data pipelines and data models.
- Ensure data quality, governance, monitoring, and security best practices.
- Troubleshoot and optimise data workflows.
- Support analytics teams with data access and insights.
- Provide technical guidance and mentor junior engineers where needed.
Requirements
- 5+ years data engineering experience.
- Strong experience with Kafka, Hadoop, Spark, DBT (Python also considered).
- Data modelling experience (Dimensional or Data Vault).
- CI/CD and Agile experience.
- Cloud experience (AWS preferred).
- Strong communication and collaboration skills.
- Relevant degree or equivalent experience.