Data Engineer
Job description
We're looking for a proactive and curious Data Engineer to join our growing Data Services function. You'll play a key role in shaping, developing, and maintaining our modern cloud-based Data & Analytics platform, helping us unlock powerful insights and create meaningful value for colleagues and customers.
What you'll be doing
Data Engineering & Pipeline Development:
- Design, build and test data solutions to prepare structured and unstructured data
- Develop automated, efficient pipelines for regulatory, analytics and warehousing use
- Ensure solutions are scalable, reliable, and include built-in monitoring, alerting and error handling
- Take ownership of data pipelines from development through to production support
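To illustrate the kind of built-in monitoring, alerting and error handling described above, here is a minimal sketch of a pipeline step wrapper with retries and an alert hook. All names (`run_step`, `flaky_extract`, the `alert` callback) are hypothetical examples, not part of this role's actual codebase.

```python
import logging
import time

# Illustrative only: a generic wrapper that retries a pipeline step,
# logs each attempt, and raises an alert when retries are exhausted.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_step(step, retries=3, backoff_seconds=0.1, alert=print):
    """Run a pipeline step with retries; alert and fail if all attempts fail."""
    for attempt in range(1, retries + 1):
        try:
            result = step()
            log.info("step %s succeeded on attempt %d", step.__name__, attempt)
            return result
        except Exception as exc:
            log.warning("step %s failed (attempt %d): %s",
                        step.__name__, attempt, exc)
            time.sleep(backoff_seconds * attempt)  # simple linear backoff
    alert(f"pipeline step {step.__name__} failed after {retries} attempts")
    raise RuntimeError(f"{step.__name__} failed after {retries} attempts")

# Hypothetical flaky extract step: fails once, then succeeds.
calls = {"n": 0}

def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("source unavailable")
    return [{"order_id": 1}]

rows = run_step(flaky_extract)
```

In practice the alert hook would post to an incident channel or paging service rather than print, but the shape (retry, log, alert, fail loudly) is the same.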
Data Analysis, Profiling & Modelling:
- Conduct data profiling and source-system analysis
- Integrate multiple data sources into conformed models for analysis (e.g. star schema, conformed dimensions)
- Implement and manage Slowly Changing Dimensions (Type 2) where required
- Help colleagues access transparent and trustworthy insights
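As a sketch of the Slowly Changing Dimension (Type 2) handling mentioned above: a Type 2 change expires the current dimension row and inserts a new version, preserving history. This minimal in-memory example uses hypothetical names (`apply_scd2`, `customer_dim`) for illustration; a real implementation would typically be a SQL MERGE or warehouse-native equivalent.

```python
from datetime import date

# Illustrative SCD Type 2 logic: when a tracked attribute changes, close out
# the current row (set valid_to, clear is_current) and append a new version.
def apply_scd2(dimension, incoming, key, tracked, today):
    """Apply a Type 2 change for one incoming record."""
    for row in dimension:
        if row[key] == incoming[key] and row["is_current"]:
            if all(row[c] == incoming[c] for c in tracked):
                return dimension  # no change in tracked attributes
            row["is_current"] = False
            row["valid_to"] = today  # expire the old version
            break
    new_row = {key: incoming[key],
               **{c: incoming[c] for c in tracked},
               "valid_from": today, "valid_to": None, "is_current": True}
    dimension.append(new_row)
    return dimension

# Hypothetical customer dimension with one current row.
customer_dim = [{"customer_id": 1, "city": "Leeds",
                 "valid_from": date(2023, 1, 1), "valid_to": None,
                 "is_current": True}]

# The customer moves: the old row is expired, a new version is appended.
apply_scd2(customer_dim, {"customer_id": 1, "city": "York"},
           key="customer_id", tracked=["city"], today=date(2024, 6, 1))
```

The same pattern scales to conformed dimensions in a star schema: fact tables join to the dimension on its surrogate key, so history stays queryable after attributes change.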
Contribute to Our DataOps Culture:
- Champion best practices in automation, testing, and operational excellence
- Work with metadata, lineage and governance frameworks to improve data visibility and control
- Help ensure our services run efficiently with clear metrics, service performance monitoring, and continuous improvement
Tools, Platforms & Technology Ownership:
- Support onboarding and monitoring of data processes across environments
- Drive improvements in data quality, reliability and platform performance
- Help align technology roadmaps with business and IT strategy
Who you'll work with
You'll collaborate closely with:
- Data Platform Lead - sharing updates to support informed decisions
- Senior Data Engineer - receiving direction, coaching and feedback
- Data Platform Team & Data Services Teams - working together to deliver high-quality solutions
Requirements
- Cloud technologies such as Microsoft Fabric (preferred), Azure Synapse, Databricks, Snowflake or similar
- Data integration and big-data modelling (medallion architecture, dimensional modelling)
- API, batch or streaming pipelines
- Languages such as Python, R, SQL, or tools like SSIS
- Data quality tools (e.g., Experian Pandora or similar)
- Metadata management / data lineage tools (e.g., Collibra, Informatica)
- Working with varied datasets: customer, transactional, digital, financial, etc.
Experience:
- Practical experience with data automation and DataOps principles
- Understanding of legacy and modern data ecosystems
- Experience with automated testing, including regression testing
- Experience troubleshooting data issues and conducting root-cause analysis
- Strong attention to detail
- Adaptability and willingness to learn new tools
- Innovative thinking and continuous improvement mindset
- Clear communication skills, especially when explaining technical topics
- Collaborative approach and accountability for delivery
If you enjoy solving complex data challenges, building robust data pipelines, and collaborating across teams to deliver high-quality outcomes, we'd love to hear from you.