Lead Analytics Engineer
About the job
Lead Analytics Engineer - Databricks/Data Modeler - London (Hybrid Role)
Location: London - 3 days in the office per week and 2 days working from home
Salary: £80K-£85K base + benefits
Our client is a global company. You will be working within the AI and Data team, using leading tools and platforms such as Databricks and Fabric. As the lead analytics engineer, you will act as the technical architect and strategic partner for the data and AI team. You will lead the design of the analytics layer, ensuring that data is scalable, auditable, and aligned with global standards.
You should have a proven track record in a data engineering role, with a focus on developing data pipelines, data warehouses, data lake houses, and similar data repositories for BI, reporting, and analytics.
Duties & Responsibilities
- Lead data pipeline and semantic model development, orchestration, and maintenance
- Create robust, fit-for-purpose data modelling solutions to address multiple business use cases
- Perform root cause analysis and proactive investigation to identify potential issues, optimise ETL pipelines, notify end users, and propose appropriate solutions
- Prepare documentation for future reference
Software Knowledge
- Experience using Databricks
- Strong SQL and Python skills, including the use of Python concepts in Databricks notebooks for data loading, transformation, and exploration (including familiarity with Pandas)
- Strong understanding of core Databricks and Spark concepts such as Delta Live Tables, Delta Sharing, Medallion Architecture, DataFrames, Workflows, Unity Catalog, and UDFs
- Understanding of Databricks metric views, Genie, etc.
- Solid understanding of key principles of data warehouse design, including star schema, fact and dimension design, dimension and fact loading patterns, and SCD and CDC concepts
- Experience using Azure DevOps for Scrum management, repositories, and CI/CD pipelines deploying to Azure
- Experience using AI development tools such as Cursor
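As an illustration of the SCD concepts listed above, the kind of Type 2 slowly changing dimension logic a candidate would be expected to know can be sketched in plain Python (the table and field names here are hypothetical examples, not part of the role):

```python
from datetime import date

# Minimal sketch of a Type 2 slowly changing dimension (SCD2) update:
# an incoming record closes the current dimension row and opens a new
# one, preserving full history. Field names are illustrative only.

def scd2_upsert(dim_rows, incoming, today):
    """Apply one incoming record to a list of dimension rows."""
    for row in dim_rows:
        if row["customer_id"] == incoming["customer_id"] and row["is_current"]:
            if row["city"] == incoming["city"]:
                return dim_rows  # attribute unchanged; nothing to do
            # Close the current version as of today.
            row["valid_to"] = today
            row["is_current"] = False
    # Open a new current version carrying the changed attribute.
    dim_rows.append({
        "customer_id": incoming["customer_id"],
        "city": incoming["city"],
        "valid_from": today,
        "valid_to": None,
        "is_current": True,
    })
    return dim_rows

dim = [{"customer_id": 1, "city": "Leeds",
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_upsert(dim, {"customer_id": 1, "city": "London"}, date(2024, 6, 1))
```

In a Databricks lakehouse this pattern would typically be expressed as a Delta Lake MERGE rather than row-by-row Python; the sketch only shows the close-then-insert logic itself.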
Nice to have
- An MSc in Data Science or equivalent
- Relevant Azure certifications such as AZ-900, DP-900, DP-203, or DP-500
- Relevant Databricks certifications, such as Lakehouse Data Engineer Associate or Lakehouse Data Engineer Professional