Data Engineer
Job description
Data Engineers are responsible for the development, performance, quality, and scaling of the company's data pipelines, with a special focus on data quality. The incumbent will execute technical tasks within the scope of data management. As part of a team, the data engineer builds internal tools and infrastructure for other teams or contributes directly to user-facing products. In this role, you will be expected to operate independently across core architectural and deployment tasks, applying hands-on experience in building reliable software and data solutions. You will also play a key role in advancing a "Data as a Product" mindset, ensuring datasets are robust enough to power the next generation of autonomous and programmatic systems.

Data Architecture & Modelling
- Independently segment data assets into sustainable and business-enabling domains.
- Create physical data models to meet business requirements and autonomously map data flows between systems and workflows.
- Support the definition of data architecture requirements and processing methodologies.
Data as a Product & Advanced Consumption
- Design, build, and maintain well-managed, unified data solutions treated as standalone products.
- Engineer highly reliable, high-quality data assets optimised for programmatic consumption via the Model Context Protocol (MCP) and autonomous AI agents.
- Build extensible data pipelines spanning different data encodings to support advanced business requirements.
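The "standalone product" idea above can be sketched as a small, machine-readable contract that ships with each dataset, so agents and MCP-style tools can discover ownership, schema, and guarantees before querying. This is a hypothetical sketch: the `DataProduct` class and its fields are illustrative assumptions, not an existing API.

```python
# Hypothetical "data as a product" contract: every dataset carries explicit
# ownership, schema, and quality metadata for programmatic consumers.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    name: str
    owner_team: str
    schema: dict[str, str]           # column -> type: the product's interface
    freshness_slo_hours: int = 24    # guarantee consumers can rely on
    tags: tuple[str, ...] = ()       # discovery metadata for agents/tools

    def describe(self) -> dict:
        """Machine-readable descriptor an agent could fetch before querying."""
        return {
            "name": self.name,
            "owner": self.owner_team,
            "columns": sorted(self.schema),
            "freshness_slo_hours": self.freshness_slo_hours,
            "tags": list(self.tags),
        }
```

Freezing the dataclass keeps the published contract immutable; changing it would be a versioned product change rather than a silent edit.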
Data Solution Build & System Ownership
- Independently deploy code to production with full end-to-end system ownership.
- Implement scalable tooling for data flow automation, efficient ingestion, and batch/event-based streaming.
- Define and monitor SLIs and SLOs, build observability and application monitoring, and engineer for failure.
Data Quality & Governance
- Support and implement data validation solutions for values and schemas.
- Monitor data availability, timeliness, completeness, and failure detection.
- Ensure compliance with regulatory, risk, and governance requirements through proper controls along data flows.
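The validation bullets above (values and schemas, with failure detection) can be sketched in a few lines of plain Python. All names here (`EXPECTED_SCHEMA`, `validate_rows`) and the example rules are illustrative assumptions, not a real framework.

```python
# Hypothetical row-level validation: schema check first (fields and types),
# then value checks; failed rows are collected rather than silently dropped.
EXPECTED_SCHEMA = {"order_id": int, "amount": float, "currency": str}

def validate_rows(rows):
    """Return (valid, errors), where errors pairs a row index with a reason."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        # Schema check: required fields present with the expected types.
        missing = [k for k in EXPECTED_SCHEMA if k not in row]
        wrong = [k for k, t in EXPECTED_SCHEMA.items()
                 if k in row and not isinstance(row[k], t)]
        if missing or wrong:
            errors.append((i, f"missing={missing} wrong_type={wrong}"))
            continue
        # Value check (assumed rules): non-negative amount, 3-letter currency.
        if row["amount"] < 0 or len(row["currency"]) != 3:
            errors.append((i, "value check failed"))
            continue
        valid.append(row)
    return valid, errors
```

Keeping rejected rows with a reason supports the failure-detection and governance points: quarantined records can be audited and replayed instead of vanishing from the pipeline.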
Software Engineering Best Practices
- Write, refactor, and maintain high-quality code and tooling.
- Apply KISS, SOLID, and DRY principles with strong documentation and test automation.
Requirements
- Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Proven independence in data architecture, physical data modelling, and production deployments.
- Hands-on experience with SLIs/SLOs, test automation, and scalable data flow tooling.
- Strong interest in structuring data for LLMs, MCPs, and agentic workflows.

Tech stack
- Core Data & Semantic Layer: Advanced SQL and Python; deep expertise in Snowflake data modelling and semantic layers.
- AI & Programmatic Enablement: Practical interest or experience with MCPs and agent mesh architectures.
- Governance & Security: Experience with IAM, data access control, and governance platforms such as Immuta.
- DevEx & Consumption: CI/CD, API development, containerisation, and serving data to BI tools and interactive applications (Streamlit, Tableau).