Senior Data Engineer (Fabric)
Job description
At FGH, data is not a reporting afterthought - it is a strategic enabler of growth, efficiency, and AI-driven decision-making. As a Senior Data Engineer, you will play a pivotal role in transforming how data is designed, delivered, and consumed across a digital retail organisation. You will shape and operationalise our modern data and AI platform, working at the intersection of architecture, engineering, and innovation.
This role offers the opportunity to build at scale, influence platform direction, and directly enable advanced analytics and AI use cases across the business.
You'll work closely with our Data Architect, BI teams, and cross-functional data product owners to build a next-generation data platform using Microsoft Fabric and Lakehouse architecture.
Accountabilities
Solution Delivery
- Work with the FGH business to design and build end-to-end ETL data solutions using Microsoft Fabric, including real-time data processing.
- Design and build data pipelines using Dataflows Gen2 and Fabric Notebooks (Spark SQL & Python) to ingest and transform data from on-prem and third-party data sources.
- Design and develop Data Lakehouses using Medallion architecture standards (a sketch follows this list).
- Implement Semantic Layers using Star Schema Modelling (Kimball) & DAX in collaboration with the BI Team/Lead.
- Deploy versioned artifacts using DevOps CI/CD.
- Support data product teams with reusable components and infrastructure.
- Optimise data storage and retrieval for analytics, AI, and operational use cases.
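For a flavour of the day-to-day work, here is a minimal sketch of a bronze-to-silver Medallion transform as it might run in a Fabric notebook. The table and column names (bronze_orders, silver_orders, order_id, amount) are illustrative assumptions, not FGH's actual schema:

    # Minimal sketch of a bronze-to-silver Medallion transform in a Fabric
    # notebook. Table and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()  # pre-provided in Fabric notebooks

    # Read raw ingested data from the bronze layer of the Lakehouse.
    bronze = spark.read.table("bronze_orders")

    # Cleanse and conform: deduplicate, standardise types, add audit metadata.
    silver = (
        bronze
        .dropDuplicates(["order_id"])
        .withColumn("order_date", F.to_date("order_date"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .withColumn("_processed_at", F.current_timestamp())
    )

    # Write to the silver layer as a Delta table.
    (silver.write
        .mode("overwrite")
        .option("overwriteSchema", "true")
        .saveAsTable("silver_orders"))

The same pattern extends to the gold layer, where conformed Star Schema tables feed the semantic model.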
Data Governance & Compliance
- Embed data quality, lineage, and observability into all pipelines (see the quality-gate sketch after this list).
- Support metadata management and data cataloguing initiatives.
- Ensure compliance with data protection standards (e.g., GDPR, ISO 27001).
- Collaborate with InfoSec and Risk teams to implement secure data handling practices.
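As an illustration of embedding quality checks directly into a pipeline, the sketch below reuses the hypothetical silver_orders table from the earlier example; the 1% null-key tolerance is an arbitrary demonstration value, not an FGH standard:

    # Minimal sketch of an in-pipeline data-quality gate. Thresholds and
    # table names are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    df = spark.read.table("silver_orders")  # hypothetical table from the earlier sketch

    total = df.count()
    null_keys = df.filter(F.col("order_id").isNull()).count()

    # Fail fast so downstream gold-layer loads never consume bad data.
    if total == 0:
        raise ValueError("silver_orders is empty - upstream ingestion may have failed")
    if null_keys / total > 0.01:  # tolerate at most 1% null business keys
        raise ValueError(f"{null_keys} of {total} rows have a null order_id")

    print(f"Quality gate passed: {total} rows, {null_keys} null keys")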
Data Integration & Management
- Integrate data from internal and third-party sources (e.g., CRM, ERP, APIs); a landing sketch follows this list.
- Ensure consistency, interoperability, and performance across data flows.
- Monitor and troubleshoot pipeline health and data reliability.
- Support real-time and batch processing environments.
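To illustrate third-party integration, the following sketch lands a hypothetical CRM API payload into the bronze layer; the endpoint, response schema, and table name are placeholders, not a real FGH integration:

    # Minimal sketch of landing third-party API data into the bronze layer.
    # The endpoint and schema are hypothetical placeholders.
    import requests
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    resp = requests.get("https://api.example-crm.com/v1/customers", timeout=30)
    resp.raise_for_status()
    records = resp.json()  # assumed: a JSON array of customer objects

    # Land the raw payload unmodified in bronze; cleansing happens in silver.
    bronze = spark.createDataFrame(records)
    bronze.write.mode("append").saveAsTable("bronze_crm_customers")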
Best Practice Expectations
- Apply engineering principles that support modularity, scalability, and resilience.
- Automate deployment, testing, and monitoring of data pipelines.
- Contribute to platform sustainability and energy efficiency.
- Align engineering practices with enterprise architecture and business goals.
Relationship Management
- Collaborate with data architects, analysts, and business stakeholders.
- Engage with platform teams to ensure infrastructure readiness.
- Support data product teams with technical enablement and onboarding.
- Evaluate and manage third-party data tools and services.
Personal & Professional Development
- Stay current with emerging data engineering tools and cloud services.
- Pursue relevant certifications and continuous learning (e.g. Fabric Data Engineer Associate, DP-700).
- Contribute to knowledge sharing and mentoring within the data community.
- Promote a culture of data reliability, automation, and innovation.

Turning your job into a career is a real passion for us, and our development programmes will enable you to grow in role.
We offer clear career pathways that will show you the way, outlining the skills, behaviours and knowledge needed to perform at the next step.
We invest in our colleagues, giving them all the opportunity to progress. Our inspired leadership programme is aimed at equipping our future leaders to coach, develop, manage change and maintain situational awareness.
Requirements
Do you have experience in Spark?

Essential
- Able to commute to Bradford City Centre
- A relevant computing degree or Microsoft certification, e.g. DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric
- Evidence of formal training, certification or several years of experience in SQL, Python, or Spark. Familiarity with data mapping frameworks.
Engineering & Technical Skills
- Data Pipeline Development: Proven experience designing and building scalable ETL/ELT pipelines.
- Data Platform: Direct experience working with the Microsoft Fabric platform (Dataflows Gen2, Notebooks & Semantic Models) and storage solutions (e.g., Data Lake, Delta Lake).
- Programming & Scripting: Proficiency in SQL and Python for data manipulation, transformation, and automation; familiarity with Spark SQL.
- Data Integration: Experience integrating structured and unstructured data from diverse sources including APIs, flat files, databases, and third-party platforms (e.g., CRM, ERP).
- Data Observability & Quality: Ability to implement monitoring, logging, and alerting for data pipelines.
- Ability to translate architectural designs into operational data solutions.

We offer a range of hybrid and flexible working options to help you achieve a healthy work-life balance. Our full-time head office colleagues work a minimum of 2 days per week in the office, allowing the perfect balance between collaborative in-person teamwork and the flexibility to work from home.
Benefits & conditions
We firmly believe that we should reward our brilliant people with extensive benefits to help them stay healthy, relax and re-energise, have fun, manage the day-to-day and plan for the future. Here are just some of our great benefits:
- Competitive salaries and annual bonus scheme
- 37 days holiday
- Healthcare cash plan
- Competitive pension scheme
- Life assurance
- Paid paternity and maternity leave
- Incredible staff discounts
- Subsidised canteen