Senior Data / AI Engineer (Python, Docker, AWS, LLMs) - 9-month contract
Job description
We are looking for a Senior Data / AI Engineer to lead the development of advanced AI capabilities within a centralised financial data platform. This role sits at the intersection of data engineering and machine learning, focusing on building intelligent, AI-powered systems that transform complex financial data into actionable insights.
You will work within an AWS-based environment to design scalable pipelines and implement cutting-edge LLM-driven solutions, enabling users to query and interact with both structured and unstructured data in intuitive ways.
Responsibilities
- Design and build robust Python pipelines to ingest, clean, transform, and chunk structured and unstructured data for vector embedding and storage
- Architect and manage cloud-native solutions using AWS services such as Bedrock, SageMaker, OpenSearch, S3, and Glue to support AI/ML workloads
- Develop, containerise, and deploy AI services using Docker, ensuring scalability, portability, and reproducibility
- Design and implement Retrieval-Augmented Generation (RAG) architectures to power intelligent search, summarisation, and question-answering capabilities
- Partner with Principal Data Engineers, Frontend Developers, and other stakeholders to integrate AI-driven features into the core platform
- Monitor, evaluate, and optimise LLM usage, focusing on performance, latency, scalability, and cost-efficiency
Requirements
- Expert-level proficiency in Python for data engineering and backend development
- Hands-on experience building applications with large language models (e.g., GPT, Llama, Claude) and orchestration frameworks such as LangChain or LlamaIndex
- Strong experience with AWS services, particularly those supporting AI/ML and data pipelines (e.g., Bedrock, SageMaker, Glue, S3)
- Practical experience implementing and managing vector stores (e.g., OpenSearch, Pinecone, Weaviate, or pgvector) for semantic search
- Proven experience containerising applications for deployment in scalable environments
Nice to have
- Experience building ETL/ELT pipelines for machine learning workflows
- Familiarity with complex financial data domains, including discounting data (e.g., BroCalc), DSO, Energy & Commodities (FACTS), and cash/credit product data
- Basic understanding of frontend integration, particularly how AI APIs connect with modern frameworks such as Next.js and TypeScript
- Experience with workflow orchestration tools such as Apache Airflow
Benefits & conditions
- Company pension
- Private medical insurance
- Cycle to Work Scheme
- Employee Assistance Programme
- Enhanced Maternity policy
- Group Life Protection Benefit
- Give as You Earn
- Employee Referral Bonus Scheme
- Diversity Networks
- Access to a range of skills and certifications