Data Engineer

Resolution
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Tech stack

Amazon Web Services (AWS)
Azure
Cloud Computing
Information Engineering
ETL
Data Mining
Data Systems
Python
Operational Databases
Spark
PySpark
Data Pipelines

Job description

Mid-Level Data Engineer (Contract)
Start Date: ASAP
Contract Duration: 6 Months

Experience within the Financial Services Industry is a must.

If you're a Data Engineer who enjoys building reliable, scalable data pipelines and wants your work to directly support front-office decision-making, this role offers exactly that.

You'll join a data engineering function working closely with investment management and front-office stakeholders, helping ensure critical financial data is delivered accurately, efficiently and at scale. This role sits at the intersection of technology, data, and the business, and is ideal for someone who enjoys ownership, delivery, and solving real-world data challenges in a regulated environment.

This is a hands-on opportunity for a mid-level engineer who can contribute from day one and take responsibility for production data workflows.

What You'll Be Doing

  • Building and maintaining end-to-end data pipelines (ETL/ELT) to support analytics and downstream use cases

  • Developing scalable data solutions using Python, with a focus on maintainability and performance

  • Working with Apache Spark / PySpark to process and transform large datasets

  • Supporting the ingestion, transformation and validation of complex financial data

  • Improving the performance, reliability and resilience of existing data workflows

  • Partnering with engineering, analytics and front-office teams to understand requirements and deliver trusted data assets

  • Taking ownership of data issues and seeing them through to resolution

  • Contributing ideas that improve data quality, automation, and overall platform efficiency

Skills That Will Help You Succeed

Essential

  • Commercial experience as a Data Engineer at mid-level

  • Strong Python development skills

  • Hands-on experience with Apache Spark / PySpark

  • Solid experience building ETL/ELT pipelines

  • Background within the financial services industry (investment management experience desirable)

  • Comfortable working with production systems in a regulated environment

  • Able to work independently and deliver in a fast-paced setting

Nice to Have

  • Exposure to Polars

  • Experience optimising Spark workloads

  • Cloud data platform experience across AWS, Azure or GCP

What Makes This Role Appealing

  • You'll work on data that directly supports investment and front-office functions

  • You'll have ownership of production pipelines, not just isolated tasks

  • You'll collaborate closely with both technical teams and business stakeholders

  • Your work will have clear, visible impact on data quality, reliability and decision-making

  • You'll join a team that values pragmatic engineering, accountability and continuous improvement

Interested? Get in touch to discuss the role in more detail and what success looks like in the first few months.

Requirements

  • Commercial experience as a Data Engineer (mid-level)

  • Strong Python skills

  • Hands-on Apache Spark / PySpark experience

  • Experience with ETL/ELT and data extraction

  • Background in financial services, ideally investment management

  • Comfortable working in a regulated, production environment

Nice to Have

  • Exposure to Polars

  • Experience optimising Spark workloads

  • Cloud data platform experience (AWS, Azure or GCP)
