Senior Data Engineer
Job description
The purpose of this role is to design, build, and maintain scalable data pipelines and infrastructure that enable the efficient processing and analysis of large, complex data sets.
This role is designed for impact, and we believe our best work happens when we connect. While we operate a flexible model, we expect you to spend time on site (at our offices or a client location) for collaboration sessions, customer meetings, and internal workshops.
Key responsibilities:
- Develop and maintain automated data processing pipelines using Google Cloud
- Design, build, and maintain data pipelines to support data ingestion, ETL, and storage
- Build and maintain automated data pipelines to monitor data quality and troubleshoot issues
- Implement and maintain databases and data storage solutions
- Implement and enforce data governance policies and procedures to ensure data quality and accuracy
- Ensure data quality, accuracy, and completeness
- Collaborate with data scientists and analysts to design and optimise data models for analytical and reporting purposes
- Develop and maintain data models to support analytics and reporting
- Monitor and maintain data infrastructure to ensure availability and performance
- Stay up-to-date with emerging trends and technologies in big data and data engineering
Requirements
Do you have experience in Spark?
- Experience of contributing to technical decision-making during in-flight projects.
- A track record of delivering a wide range of projects with varied tools and technologies, and of solving a broad range of problems using your technical skills.
- Demonstrable experience of applying strong communication and stakeholder management skills when engaging with customers.
- Significant experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle.
- Experience with big data technologies such as Hadoop, Spark, or Hive.
- Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow.
- Proficiency in Python and at least one other programming language such as Java or Scala.
- Willingness to mentor more junior members of the team.
- Strong analytical and problem-solving skills with the ability to work independently and in a team environment.
Benefits & conditions
We believe in supporting our team members both professionally and personally. Here's how we invest in you:
Compensation and Financial Wellbeing
- Competitive base salary.
- Matching pension scheme (up to 5%) from day one.
- Discretionary company bonus scheme.
- Death in Service coverage of 4× annual salary from day one.
- Employee referral scheme.
- Tech Scheme.
Health and Wellness
- Private medical insurance from day one.
- Optical and dental cashback scheme.
- Help@Hand app: access to remote GPs, second opinions, mental health support, and physiotherapy.
- Employee Assistance Programme (EAP) service.
- Cycle to Work scheme.
Work-Life Balance and Growth
- 36 days' annual leave (inclusive of bank holidays).
- An extra paid day off for your birthday.
- Ten paid learning days per year.
- Flexible working hours.
- Market-leading parental leave.
- Sabbatical leave (after five years).
- Work from anywhere (up to 3 weeks per year).
- Industry-recognised training and certifications.
- Bonusly employee recognition and rewards platform.
- Clear opportunities for career development.
- Length of Service Awards.
- Regular company events.