Senior Data Engineer

My Money Matters
Charing Cross, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£60K

Job location

Charing Cross, United Kingdom

Tech stack

Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Azure
Google BigQuery
Data Architecture
Data Validation
Information Engineering
Data Governance
Data Infrastructure
Data Integration
Data Integrity
ETL
Data Security
Data Systems
Data Warehousing
IBM Cognos Business Intelligence
Python
Machine Learning
Performance Tuning
Power BI
Software Engineering
SQL Databases
Tableau
Talend
Data Processing
Google Cloud Platform
Data Storage Technologies
Microsoft Power Automate
Snowflake
Zapier
Data Strategy
Data Lake
Information Technology
Performance Monitor
Data Management
Data Delivery
Data Pipelines
Redshift
Databricks
Programming Languages

Job description

We are looking for an experienced Data Engineer to join our technology team and support the development of scalable data pipelines and infrastructure within our Databricks environment. Reporting directly to the Business Intelligence Manager, this role will act as the subject-matter expert (SME) for Data Engineering and work closely with teams across the organisation to ensure efficient, reliable data delivery.

The Data Engineer will be responsible for designing and maintaining robust data architectures, optimising data processing workflows, and ensuring high standards of data integrity, accessibility, and governance. This role will play a key part in enabling high-quality analytics and supporting the business with trusted, well-structured data.

In addition, the Data Engineer will help drive the organisation's data strategy forward, including preparing the data platform for AI use cases and the adoption of tools such as Genie AI in Databricks. Working alongside stakeholders, you will identify opportunities for innovation through AI, automation, and modern data engineering best practices, helping shape the future of data management and capability across the business.

Key Responsibilities

Data Architecture and Engineering:

  • Design and implement scalable, efficient data pipelines and infrastructure to support business intelligence and analytics initiatives (Databricks, Power Automate).
  • Lead data engineering efforts to build robust, high-performance data systems that align with business objectives and produce datasets fit for GenAI and machine learning use.
  • Collaborate with stakeholders across the organisation to understand data requirements and ensure data solutions meet business needs, serving the right data to the right consumers.

Data Pipeline and Workflow Management:

  • Oversee the development, optimisation, and maintenance of data pipelines, ensuring data is collected, processed, and made available for analysis in a timely and accurate manner.
  • Ensure data quality, integrity, and governance across all data systems by implementing best practices for data validation, security, and privacy (Unity Catalog in Databricks).
  • Develop and maintain ETL processes to integrate data from various sources into centralised data warehouses and data lakes (a minimal pipeline sketch follows this list).
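
For illustration only, below is a minimal PySpark sketch of the kind of pipeline described above. It assumes a Databricks workspace with Unity Catalog enabled; the landing path and the three-level table names (main.finance.events and its quarantine counterpart) are hypothetical placeholders, not details from this posting.

    # Minimal Databricks pipeline sketch (PySpark). The source path and the
    # Unity Catalog table names below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

    # Extract: read raw files from a (hypothetical) landing volume.
    raw = spark.read.json("/Volumes/main/landing/raw_events")

    # Validate: keep rows that have the required keys, quarantine the rest.
    required = F.col("event_id").isNotNull() & F.col("event_ts").isNotNull()
    valid = raw.filter(required).withColumn("event_ts", F.to_timestamp("event_ts"))
    invalid = raw.filter(~required)

    # Load: append to Unity Catalog-governed Delta tables (catalog.schema.table).
    valid.write.mode("append").saveAsTable("main.finance.events")
    invalid.write.mode("append").saveAsTable("main.finance.events_quarantine")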

Cross-functional Collaboration:

  • Partner with analysts and business teams to design data architectures that enable effective reporting, analysis, and decision-making.
  • Act as the primary point of contact for data engineering, ensuring smooth communication between technical teams and business stakeholders.
  • Translate business needs into technical specifications, ensuring the data infrastructure supports both current and future analytics requirements.

Performance Monitoring and Optimisation:

  • Monitor and optimise the performance of data systems and pipelines, ensuring they meet service level agreements (SLAs) and business expectations (a simple SLA check is sketched after this list).
  • Continuously evaluate and implement new technologies and tools to enhance data processing capabilities and improve overall system performance (Zapier, Genie AI).
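
As a rough illustration of the SLA check mentioned above, the Python sketch below times a run and logs a warning on breach. The run_pipeline stub and the 15-minute threshold are hypothetical placeholders, not details from this posting.

    # Simple SLA check sketch: time a pipeline run and flag breaches.
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    SLA_SECONDS = 15 * 60  # hypothetical 15-minute runtime SLA

    def run_pipeline() -> None:
        """Stand-in for the real job; replace with the actual pipeline call."""
        time.sleep(1)

    start = time.monotonic()
    run_pipeline()
    elapsed = time.monotonic() - start

    if elapsed > SLA_SECONDS:
        logging.warning("SLA breached: run took %.0fs (limit %ds)", elapsed, SLA_SECONDS)
    else:
        logging.info("Run finished in %.0fs, within SLA", elapsed)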

Process Improvement and Innovation:

  • Continuously identify areas for process improvements and implement automation to enhance the efficiency and scalability of data workflows.
  • Ensure data storage and processing solutions are optimised for cost and performance, adapting to evolving business needs.

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Engineering, Software Engineering, or a related field (desirable).

Experience:

  • 5+ years of experience in data engineering, data warehousing, or a similar role.
  • Proven experience in designing, building, and maintaining large-scale data systems and workflows in cloud environments.

Technical Skills:

  • Expertise in SQL and Python, or other programming languages used for data processing and pipeline development.
  • Strong knowledge of data warehousing solutions (e.g., Snowflake, Databricks, Redshift, BigQuery) and cloud platforms (e.g., AWS, GCP, Azure).
  • Experience with ETL tools, data integration platforms, and data pipeline orchestration tools (e.g., Apache Airflow, Talend, dbt); a minimal orchestration sketch follows this list.
  • Knowledge of reporting and business intelligence tools (e.g., Power BI, Tableau, Cognos).
  • Familiarity with data governance principles, data security, and privacy standards.
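
To make the orchestration point concrete, here is a minimal Apache Airflow sketch of a three-step ETL DAG. The DAG name and task bodies are hypothetical placeholders, and the schedule keyword assumes Airflow 2.4 or later.

    # Minimal Airflow DAG sketch: a linear extract -> transform -> load chain.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract() -> None:
        print("pull data from source systems")  # placeholder step

    def transform() -> None:
        print("clean and model the data")       # placeholder step

    def load() -> None:
        print("write to the warehouse")         # placeholder step

    with DAG(
        dag_id="daily_sales_etl",               # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                      # assumes Airflow 2.4+
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task  # run order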

Analytical and Problem-Solving Skills:

  • Excellent problem-solving skills with the ability to design innovative solutions for complex data challenges.
  • Strong ability to troubleshoot data issues, identify root causes, and implement effective resolutions.
  • Experience in performance tuning and optimising data systems for scalability and efficiency.

Communication and Interpersonal Skills:

  • Strong communication skills with the ability to present complex technical concepts to both technical and non-technical audiences.
  • Proven time management skills, able to effectively manage workload independently.
  • Ability to work effectively with cross-functional teams and handle multiple projects simultaneously in a fast-paced environment.
