Senior Data Engineer - Azure Databricks
Falcon Chase International
Charing Cross, United Kingdom
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior
Job location: Charing Cross, United Kingdom
Tech stack
JavaScript
API
Agile Methodologies
Azure
Big Data
Cloud Engineering
Cloud Storage
Code Review
Databases
Information Engineering
Data Governance
Data Integration
Data Transformation
Data Warehousing
Relational Databases
DevOps
Electronic Data Interchange (EDI)
R
Python
Metadata Management
Azure SQL
NoSQL
Power BI
SQL Databases
Data Streaming
Tableau
YAML
Data Processing
Spark
Git
Data Lake
PySpark
Kubernetes
Data Lineage
Collibra
Deployment Automation
Kafka
Video Streaming
Data Pipelines
Docker
Jenkins
Databricks
Job description
We are looking for a highly skilled Senior Data Engineer to help build and enhance our Azure Databricks platform, which powers economic data for Monetary Analysis, Forecasting and Modelling. The role focuses on designing scalable data pipelines, implementing complex data transformations, and ensuring high data quality and reliability across large datasets in Azure.
Data Pipeline Development & Optimisation
- Design, build and maintain scalable pipelines to ingest and process data from APIs, databases and financial data providers.
- Optimise pipelines for performance, reliability and cost efficiency.
- Implement automated data quality checks and validation rules.
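For illustration, a minimal PySpark sketch of the kind of automated quality check this involves; the table name, columns and validation rules are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical raw table landed by an ingestion pipeline.
raw = spark.read.table("raw.fx_rates")

# Simple validation rules: required fields present, values in a sensible range.
valid = F.col("currency").isNotNull() & (F.col("value") > 0)

failed_count = raw.filter(~valid).count()

# Fail the run (or quarantine the offending rows) when checks do not pass.
if failed_count > 0:
    raise ValueError(f"{failed_count} rows failed data quality checks")
```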
Data Transformation & Processing
- Develop complex data transformations using Spark (PySpark/Scala).
- Build data processing logic for cleansing, enrichment and aggregation.
- Ensure accuracy and consistency across the data life cycle.
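As a rough illustration of the cleansing, enrichment and aggregation steps above, a PySpark sketch with assumed table and column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical source tables; names are illustrative only.
transactions = spark.read.table("raw.transactions")
counterparties = spark.read.table("reference.counterparties")

# Cleansing: deduplicate and enforce types.
cleansed = (
    transactions
    .dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("amount").isNotNull())
)

# Enrichment: join in reference data.
enriched = cleansed.join(counterparties, on="counterparty_id", how="left")

# Aggregation: daily totals per counterparty.
daily_totals = (
    enriched
    .groupBy("counterparty_name", F.to_date("booked_at").alias("booking_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").saveAsTable("curated.daily_counterparty_totals")
```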
Azure Databricks & Cloud Engineering
- Work extensively with Azure Databricks, Unity Catalog and Delta Lake.
- Optimise Databricks workloads for performance and cost.
- Develop using SQL, Python, R, YAML and JavaScript.
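By way of example, day-to-day work with Delta Lake under Unity Catalog might look like the sketch below; the three-level table names and the partitioning choice are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Unity Catalog addresses tables as catalog.schema.table; names here are illustrative.
quotes = spark.read.table("finance.raw.market_quotes")
todays_quotes = quotes.filter(F.col("quote_date") == F.current_date())

# Write a managed Delta table, partitioned so date-bounded queries stay cheap.
(
    todays_quotes.write
    .format("delta")
    .mode("append")
    .partitionBy("quote_date")
    .saveAsTable("finance.curated.market_quotes_daily")
)

# Periodic compaction keeps small-file overhead under control on busy tables.
spark.sql("OPTIMIZE finance.curated.market_quotes_daily")
```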
Data Integration
- Integrate data from relational databases, APIs and streaming sources.
- Implement best-practice data integration patterns.
- Collaborate with API teams to ensure seamless data exchange.
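A minimal sketch of pulling data from a provider API into the platform, assuming a hypothetical REST endpoint and response shape:

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical financial data provider endpoint; URL, parameters and schema are assumptions.
response = requests.get(
    "https://api.example.com/v1/exchange-rates",
    params={"base": "GBP", "date": "2024-01-31"},
    timeout=30,
)
response.raise_for_status()
records = response.json()["rates"]  # e.g. [{"currency": "USD", "rate": 1.27}, ...]

# Land the payload as a DataFrame so it can be joined with warehouse tables downstream.
df = spark.createDataFrame(records)
df.write.mode("append").saveAsTable("raw.exchange_rates")
```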
Data Quality & Governance
- Implement data governance and quality processes using Azure Purview.
- Enable data lineage tracking and metadata management.
- Ensure compliance with governance policies and standards.
Collaboration & Communication
- Work closely with data scientists, economists and technical teams.
- Translate business requirements into technical solutions.
- Participate in code reviews and knowledge-sharing sessions.
Automation & DevOps
- Build CI/CD pipelines and automate deployments.
- Apply DevOps best practices for data engineering workflows.
- Collaborate with DevOps teams on environment deployments.
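One concrete example of DevOps practice applied to data engineering workflows is running unit tests over transformation logic in CI before deployment; the function under test below is hypothetical:

```python
import pytest
from pyspark.sql import SparkSession, functions as F


def add_vat(df, rate=0.2):
    """Hypothetical transformation under test: add a VAT-inclusive amount column."""
    return df.withColumn("amount_incl_vat", F.round(F.col("amount") * (1 + rate), 2))


@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_add_vat(spark):
    df = spark.createDataFrame([(100.0,), (250.0,)], ["amount"])
    result = add_vat(df).collect()
    assert [row["amount_incl_vat"] for row in result] == [120.0, 300.0]
```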
Requirements
- 8+ years of Data Engineering experience, including 3+ years with Azure Databricks.
- Strong Python and Spark (PySpark or Scala) expertise.
- Deep knowledge of data warehousing, modelling and integration patterns.
- Experience with Azure Data Factory, Azure Blob Storage and Azure SQL.
- Strong experience working with large and complex datasets.
- Expertise in Databricks implementation and optimisation.
- Experience with SQL and NoSQL databases.
- Knowledge of data governance and data quality frameworks.
- Experience with Git and Agile methodologies.
- Strong communication and problem-solving skills.
Nice to Have
- Streaming technologies (Kafka, Azure Event Hubs).
- Data visualisation tools (Power BI, Tableau).
- DevOps tools (Azure DevOps, Jenkins, Docker, Kubernetes).
- Financial services or economic data experience.
- Azure Data Engineer certifications.