Senior Data Engineer
Job description
This is a Senior Data Engineer role within a high-performing actuarial and analytics function operating in a regulated insurance environment. The team builds and maintains a bespoke analytics data platform that underpins core business functions, including portfolio reporting, actuarial analysis and ad hoc decision support.
The role plays a key part in an ongoing programme of change across a modern Analytics Data Platform. You will own delivery end-to-end, working closely with business stakeholders, while also shaping the long-term technical roadmap using contemporary data engineering practices.
This is a hands-on role requiring strong technical output, sound engineering judgement and the ability to influence how a growing data platform evolves.

The Senior Data Engineer is responsible for delivering scalable, secure and maintainable data solutions across a Lakehouse-style architecture. You will design, build and operate data pipelines that transform raw data into high-quality, analysis-ready products, supporting actuarial and business users.
You will apply modern patterns such as the Medallion (Bronze/Silver/Gold) framework, and work extensively with tools including dbt, Airflow, PySpark, Azure Data Factory and Synapse/Microsoft Fabric.
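To give a flavour of this kind of work, the sketch below shows a minimal Bronze-to-Silver PySpark step in a Medallion-style layout. It is illustrative only: the storage paths, table and column names are hypothetical placeholders, not details of the actual platform, and it assumes a Spark environment with Delta Lake available.

```python
# Illustrative only: a minimal Bronze -> Silver PySpark step.
# Paths, table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_policies").getOrCreate()

# Read raw policy records from the Bronze layer (assumes Delta Lake is configured).
bronze = spark.read.format("delta").load(
    "abfss://lake@storage.dfs.core.windows.net/bronze/policies"
)

# Standardise types, remove duplicates and drop unusable rows for the Silver layer.
silver = (
    bronze
    .withColumn("policy_start_date", F.to_date("policy_start_date"))
    .withColumn("premium_amount", F.col("premium_amount").cast("decimal(18,2)"))
    .dropDuplicates(["policy_id"])
    .filter(F.col("policy_id").isNotNull())
)

# Write the cleaned dataset to the Silver layer, ready for downstream modelling in dbt.
silver.write.format("delta").mode("overwrite").save(
    "abfss://lake@storage.dfs.core.windows.net/silver/policies"
)
```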
Alongside delivery, you will contribute to the maturity of data engineering practices, helping to raise standards around data quality, automation, documentation and operational excellence.

Responsibilities

- Deliver data engineering change projects under the direction of a Lead Data Engineer
- Design, build and maintain scalable and secure data pipelines
- Own transformation logic within Lakehouse environments, delivering clean, trusted datasets
- Use PySpark extensively to transform raw data into high-quality analytical products
- Build and operate pipelines using Azure Data Factory, Synapse/Fabric, and cloud data storage
- Apply dbt for transformation and modelling, and Airflow (or similar) for orchestration, as illustrated in the sketch after this list
- Implement automated data quality checks, monitoring and alerting to support robust DataOps
- Support BAU enhancement and maintenance of existing data products
- Work closely with actuarial and business stakeholders to translate requirements into technical solutions
- Identify platform bottlenecks and continuously improve performance, reliability and simplicity
- Document pipelines, code and processes to ensure maintainability and knowledge transfer
- Collaborate with architects and wider technology teams to align solutions with long-term strategy
- Contribute to a strong engineering culture focused on quality, ownership and accountability
- Stay current with emerging data engineering technologies and best practices
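As referenced above, the following is a minimal, illustrative Airflow sketch (assuming Airflow 2.4 or later) of how a dbt build and a simple data quality check might be orchestrated. The DAG id, schedule, commands, project path and check logic are hypothetical placeholders rather than the team's actual setup.

```python
# Illustrative only: a minimal Airflow DAG orchestrating a dbt build
# followed by a simple data quality check. All names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_row_count(**_):
    # Placeholder quality check: in practice this would query the Silver/Gold
    # tables and raise if expectations (row counts, nulls, freshness) fail.
    row_count = 1  # stand-in value for the example
    if row_count == 0:
        raise ValueError("Data quality check failed: table is empty")


with DAG(
    dag_id="analytics_platform_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_dbt = BashOperator(
        task_id="run_dbt_build",
        bash_command="dbt build --project-dir /opt/dbt/analytics",
    )
    quality_check = PythonOperator(
        task_id="data_quality_check",
        python_callable=check_row_count,
    )

    run_dbt >> quality_check
```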
Requirements
- 5+ years' experience in a Data Engineering or similar role
- Strong experience designing and building production-grade data pipelines
- Expert-level proficiency in Python/PySpark
- Hands-on experience with the Microsoft data stack, including Azure Data Factory, Data Lake, Synapse Analytics and/or Microsoft Fabric
- Applied experience using dbt, Airflow, or equivalent tools for transformation and orchestration
- Solid understanding of data modelling, data warehousing concepts and Lakehouse architectures
- Strong grasp of data quality principles and operational best practices
- Comfortable owning delivery end-to-end in a fast-paced, enterprise environment
- Strong communication skills with the ability to influence technical and non-technical stakeholders
- High standards of engineering quality and attention to detail
- Experience mentoring junior engineers is a plus
- Exposure to machine learning frameworks is beneficial but not required
Working Style & Behaviours
- Proactive, accountable and delivery-focused
- Comfortable working autonomously while collaborating closely with others
- Analytical and pragmatic in problem-solving
- Strong ownership mindset with a focus on outcomes
- Values clean design, simplicity and long-term maintainability