Azure Data Engineer (SC Cleared or Eligible) - Permanent - London, UK

Cactus IT Solutions UK Ltd
Charing Cross, United Kingdom

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Charing Cross, United Kingdom

Tech stack

Unity Catalog
API
Azure
Big Data
Cloud Storage
Information Engineering
Data Governance
ETL
Data Warehousing
Relational Databases
DevOps
Hive
Python
Meta-Data Management
SQL Azure
NoSQL
Power BI
SQL Databases
Data Streaming
Tableau
Spark
Data Lake
PySpark
Kubernetes
Collibra
Deployment Automation
Kafka
Data Pipelines
Docker
Jenkins
Databricks

Job description

We are seeking a highly skilled and experienced Senior Data Engineer to contribute to the development and maintenance of a cutting-edge Azure Databricks platform focused on economic data. This platform supports Monetary Analysis, Forecasting, and Modelling activities.

  • Design, develop, and optimise scalable data pipelines for ingesting and transforming data from multiple sources into Azure Databricks.

  • Implement robust data quality checks, validation rules, and monitoring processes.
  • Develop complex data transformations using Spark (PySpark/Scala).
  • Work extensively with Azure Databricks, Delta Lake, Spark SQL, and Unity Catalog.
  • Build and maintain integrations with APIs, relational databases, and streaming data sources.
  • Support data governance and metadata management using Azure Purview.
  • Collaborate with data scientists, economists, and technical teams to deliver data-driven solutions.
  • Implement CI/CD pipelines and DevOps best practices for deployment automation.
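
The data-quality checks and validation rules mentioned above can be illustrated with a minimal, framework-agnostic sketch in plain Python. In production this logic would run as PySpark transformations on Databricks before writing to Delta Lake; all field names and rules here are hypothetical examples, not part of the role's actual platform:

```python
# Sketch of row-level data-quality validation for an economic-data batch.
# Field names (series_id, observation_date, value) are illustrative only.
from datetime import date

REQUIRED_FIELDS = {"series_id", "observation_date", "value"}

def validate_record(record: dict) -> list:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if "value" in record and not isinstance(record["value"], (int, float)):
        errors.append("value must be numeric")
    if "observation_date" in record and not isinstance(record["observation_date"], date):
        errors.append("observation_date must be a date")
    return errors

def partition_records(records):
    """Split a batch into (valid, rejected); rejected rows carry their errors."""
    valid, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejected.append((rec, errs))
        else:
            valid.append(rec)
    return valid, rejected

batch = [
    {"series_id": "CPI", "observation_date": date(2024, 1, 31), "value": 4.0},
    {"series_id": "GDP", "observation_date": date(2024, 1, 31), "value": "n/a"},
    {"series_id": "M4"},  # missing fields -> rejected
]
valid, rejected = partition_records(batch)
print(len(valid), len(rejected))  # → 1 2
```

Rejected rows would typically land in a quarantine table with their error messages, so monitoring can alert on validation-failure rates.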

Requirements

The ideal candidate will have strong expertise in data engineering, Azure cloud technologies, Databricks, and large-scale data processing environments.

  • 10+ years of experience in Data Engineering with 3+ years of hands-on Azure Databricks experience.

  • Strong proficiency in Python, Spark (PySpark), or Scala.
  • Expertise in Azure Data Factory, Azure Blob Storage, Azure SQL Database, and Databricks.
  • Strong understanding of data warehousing, data modelling, and ETL/ELT pipelines.
  • Experience with SQL/NoSQL databases and streaming technologies such as Kafka or Azure Event Hubs.
  • Hands-on experience with Azure Purview for data governance and quality management.
  • Familiarity with DevOps tools such as Azure DevOps, Jenkins, Docker, and Kubernetes.
  • Experience with Tableau or Power BI.
  • Prior experience working in financial services or economic data environments is highly preferred.
  • Relevant Azure certifications are advantageous.

Apply for this position