Sr. Azure Data Engineer

Technopride Ltd
Manchester, United Kingdom
2 days ago

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£104K

Job location

Manchester, United Kingdom

Tech stack

Data analysis
Azure
Computer Programming
Continuous Integration
Information Engineering
Data Systems
DevOps
Distributed Data Store
Distributed Systems
Job Scheduling
Python
Metadata
Package Management Systems
Performance Tuning
Release Management
Software Deployment
Data Logging
Data Processing
Data Ingestion
Spark
Git
Pandas
PySpark
Deployment Automation
Data Management
Data Pipelines
Serverless Computing
Databricks

Job description

  • Design, develop, and maintain metadata-driven data pipelines using Azure Data Factory (ADF) and Databricks.
  • Build and implement end-to-end metadata frameworks that promote scalability, reusability, and standardization.
  • Develop and optimize large-scale data processing workflows using PySpark, SparkSQL, and Pandas.
  • Collaborate with architecture, analytics, and platform teams to integrate data solutions into enterprise data platforms.
  • Implement and manage CI/CD pipelines for automated build, testing, and deployment of data engineering solutions.
  • Ensure data quality, governance, security, and compliance with defined organizational standards.
  • Apply best practices for observability, monitoring, logging, and alerting across data pipelines.
  • Provide technical leadership and take full ownership of assigned data engineering initiatives, from design through production support.
  • Troubleshoot and optimize pipeline performance, reliability, and cost efficiency.
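The first two responsibilities above centre on metadata-driven pipelines: pipeline behaviour (sources, targets, load strategy) is described as data rather than hard-coded per dataset. As a minimal stdlib-Python sketch of the idea (all names such as `PIPELINE_METADATA` and `plan_ingestion` are illustrative, not from this role; in practice the metadata would drive ADF or Databricks jobs):

```python
import json

# Hypothetical metadata registry: each entry declares how one dataset
# should be ingested, instead of hard-coding a pipeline per dataset.
PIPELINE_METADATA = json.loads("""
[
  {"source": "sales_raw", "target": "sales_curated", "load": "incremental", "key": "order_id"},
  {"source": "hr_raw",    "target": "hr_curated",    "load": "full",        "key": "emp_id"}
]
""")

def plan_ingestion(metadata):
    """Turn declarative metadata entries into executable pipeline steps."""
    steps = []
    for entry in metadata:
        steps.append({
            # Incremental loads merge on a key; full loads overwrite the target.
            "action": "merge" if entry["load"] == "incremental" else "overwrite",
            "source": entry["source"],
            "target": entry["target"],
            "key": entry["key"],
        })
    return steps

for step in plan_ingestion(PIPELINE_METADATA):
    print(step["action"], step["source"], "->", step["target"])
```

Adding a new dataset then means adding a metadata entry, not writing a new pipeline, which is the scalability and reusability the framework bullet describes.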

Requirements

We are seeking an experienced Senior Data Engineer with strong expertise in Azure-based data engineering and metadata-driven architectures. The ideal candidate will design, build, and own scalable data pipelines and frameworks using Azure Data Factory and Databricks, while ensuring high standards of data quality, automation, and operational excellence. This role requires deep hands-on technical skills, a strong DevOps understanding, and the ability to lead complex data engineering initiatives end to end.

  • Azure Data Factory (ADF): Strong expertise in designing, building, and orchestrating complex data pipelines.

  • Azure Databricks: Hands-on experience with notebooks, clusters (including job and serverless clusters), job scheduling, and Databricks Asset Bundles.
  • PySpark / SparkSQL: Strong knowledge of distributed data processing, performance tuning, watermarking, and incremental data processing patterns.
  • Pandas: Advanced data manipulation and transformation capabilities.
  • Metadata-driven architecture: Proven experience designing and implementing metadata frameworks for data ingestion and processing.
  • CI/CD & DevOps: Experience using tools such as Azure DevOps, Git, and automated deployment pipelines.
  • Programming: Proficiency in Python (including package management and build artifacts such as wheels) and/or Scala.
  • Observability: Experience implementing monitoring, logging, and alerting for data pipelines and distributed systems.
  • Strong understanding of data protection, security, and compliance considerations in cloud-based data platforms.
  • Solid grasp of DevOps practices, automation, and release management for data engineering workloads.
  • Excellent analytical, problem-solving, and troubleshooting skills.
  • Ability to work independently while collaborating effectively with cross-functional teams.
  • Strong communication skills with the ability to explain complex technical concepts clearly.
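The watermarking and incremental-processing patterns named in the PySpark/SparkSQL requirement can be sketched in plain Python (a minimal illustration under assumed names like `incremental_load`; in a real pipeline this logic would run in PySpark against source and target tables, with the watermark persisted between runs):

```python
from datetime import datetime

def incremental_load(rows, watermark):
    """High-watermark pattern: process only rows newer than the last
    recorded watermark, then advance the watermark for the next run."""
    new_rows = [r for r in rows if r["event_time"] > watermark]
    new_watermark = max((r["event_time"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "event_time": datetime(2024, 1, 1, 10)},
    {"id": 2, "event_time": datetime(2024, 1, 1, 12)},
    {"id": 3, "event_time": datetime(2024, 1, 1, 14)},
]

# First run: watermark starts at the epoch, so every row is processed.
batch, wm = incremental_load(rows, datetime(1970, 1, 1))
print(len(batch), wm)  # 3 rows; watermark advances to 2024-01-01 14:00

# Second run: only rows newer than the stored watermark are processed.
rows.append({"id": 4, "event_time": datetime(2024, 1, 1, 16)})
batch, wm = incremental_load(rows, wm)
print(len(batch), wm)  # 1 row; watermark advances to 2024-01-01 16:00
```

Rerunning with an unchanged source yields an empty batch and an unchanged watermark, which is what makes the pattern safely re-runnable.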

Apply for this position