Senior Data Engineer
Job description
FDM is a global business and technology consultancy seeking a Senior Data Engineer to work for our client within the finance sector. This is initially a 12-month contract with the potential to extend, and it will be a hybrid role based in Bristol. You will design, build, and maintain robust end-to-end data pipelines across both batch and streaming workloads, developing scalable, cloud-native data solutions and performing complex data transformations with tools such as SQL, Python, and dbt against large-scale datasets in platforms such as BigQuery and Cloud Storage (a brief illustrative sketch follows the list below). You will also be responsible for ensuring high standards of data quality, governance, security, and compliance, while optimising performance, cost, and reliability across the data platform.
Key responsibilities
- Design, build, and maintain end-to-end data pipelines (batch and streaming)
- Develop scalable data solutions using cloud-native services
- Perform complex data transformations using SQL, Python, and dbt
- Work with large-scale datasets in BigQuery / Cloud Storage
- Implement data quality, validation, monitoring, and governance standards
- Optimise performance, cost, and reliability of data platforms
- Apply CI/CD pipelines and Infrastructure-as-Code practices
- Collaborate with cross-functional Agile teams
- Mentor junior engineers and provide technical leadership
- Ensure compliance with security, risk, and regulatory standards
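To give a flavour of the transformation work, below is a minimal sketch of a batch transformation run against BigQuery with the Python client library; the project, dataset, and table names are illustrative assumptions, not the client's actual estate.

```python
# Minimal sketch: deduplicate raw events and write them to a curated table.
# All project/dataset/table names here are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT * EXCEPT(row_num)
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY event_id ORDER BY ingested_at DESC
               ) AS row_num
        FROM `example-project.raw.payment_events`
    )
    WHERE row_num = 1
"""

job_config = bigquery.QueryJobConfig(
    destination="example-project.curated.payment_events",
    write_disposition="WRITE_TRUNCATE",  # replace the table on each batch run
)
client.query(sql, job_config=job_config).result()  # block until the job finishes
```

In practice, a model like this would more likely live in dbt, with the SQL versioned and tested rather than embedded in Python.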
Requirements
- Strong hands-on experience in a Data Engineering role
- Advanced proficiency in SQL and Python
- Experience working with cloud platforms, preferably Google Cloud Platform (GCP), including BigQuery, Dataflow, Cloud Composer, and Cloud Storage
- Experience with AWS or Azure is also acceptable
- Solid understanding of ETL / ELT frameworks and data modelling best practices
- Experience with streaming technologies such as Kafka and/or Pub/Sub (see the sketch after this list)
- Hands-on experience with big data processing tools including Spark, Dataflow, and/or Flink
- Experience using CI/CD tools such as Jenkins, Harness, or GitLab
- Strong Infrastructure-as-Code experience, particularly with Terraform
- Experience with containerisation technologies including Docker and Kubernetes
- Familiarity with Agile / Scrum delivery methodologies
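As a reference point for the streaming requirement above, here is a minimal sketch of a Pub/Sub consumer in Python; the project and subscription names are illustrative assumptions.

```python
# Minimal sketch: pull messages from a Pub/Sub subscription.
# Project and subscription names are hypothetical.
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "payment-events-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # In a real pipeline, validate and transform here before forwarding
    # the record to BigQuery or Cloud Storage.
    print(f"received: {message.data!r}")
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result(timeout=30)  # run briefly for demonstration purposes
except TimeoutError:
    streaming_pull.cancel()
```

A production consumer would add dead-lettering, retry policies, and schema validation rather than printing payloads.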
Desirable Experience
- Experience with dbt
- Experience with Apache Airflow and/or Cloud Composer (a brief sketch follows this list)
- Exposure to building and maintaining ML data pipelines
- Experience within Banking or Financial Services environments
- Knowledge of data governance, security, and compliance frameworks
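For the orchestration point above, here is a minimal sketch of the kind of Airflow DAG Cloud Composer schedules; the DAG ID, schedule, and task logic are illustrative assumptions.

```python
# Minimal sketch: a daily Airflow DAG with a single Python task.
# The DAG ID and loading logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_daily_batch() -> None:
    # Placeholder for real work: extract from source, validate, load to BigQuery.
    print("loading daily batch")

with DAG(
    dag_id="example_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # run once per day (Airflow 2.4+ argument name)
    catchup=False,       # do not backfill missed runs
) as dag:
    PythonOperator(task_id="load_daily_batch", python_callable=load_daily_batch)
```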