Databricks Data Engineer

Promade Solutions Ltd
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£60K

Job location

Remote

Tech stack

Airflow
Amazon Web Services (AWS)
Azure
Google BigQuery
Cloud Computing
Continuous Integration
Information Engineering
Data Governance
ETL
Data Warehousing
DevOps
Distributed Data Store
Python
Azure SQL Database
DataOps
SQL Databases
Google Cloud Platform
Delivery Pipeline
Snowflake
Spark
Git
Data Lake
Real Time Data
Kafka
Data Management
Machine Learning Operations
Stream Analytics
Software Version Control
Data Pipelines
Redshift
Databricks

Job description

As Promade Solutions continues to grow and deliver cutting-edge data and analytics solutions to both existing and new customers, we are looking for experienced Databricks Data Engineers who are passionate about building scalable, reliable, and high-performance data platforms.

As a Databricks Data Engineer, you will play a key role in designing, developing, and optimising modern data pipelines and lakehouse architectures. You will work closely with analytics, product, and engineering teams to deliver trusted, production-ready datasets that power reporting, advanced analytics, and data-driven decision-making.

  • Design, build, and maintain scalable ETL/ELT pipelines for batch and streaming data workloads
  • Develop and optimise Databricks Lakehouse solutions using Apache Spark and Delta Lake
  • Design and maintain data models, data warehouses, and lake/lakehouse architectures
  • Implement data quality, validation, observability, and monitoring frameworks
  • Optimise data pipelines for performance, reliability, and cost efficiency
  • Collaborate with cross-functional teams to deliver trusted, production-grade datasets
  • Work extensively with Azure cloud services, including Azure Databricks, Azure Data Factory, Azure SQL DB, Azure Synapse, and Azure Storage
  • Develop and manage stream-processing systems using tools such as Kafka and Azure Stream Analytics
  • Write clean, maintainable Python and SQL code and develop high-quality Databricks notebooks
  • Support CI/CD pipelines, source control, and automated deployments for data workloads
  • Contribute to improving data engineering standards, frameworks, and best practices across the organisation
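To give a flavour of the data-quality and validation work the responsibilities above describe, here is a minimal sketch of a pre-publish validation check in plain Python. The column names and rules (`order_id`, `amount`, non-negative values) are hypothetical illustrations, not requirements from this posting; in a real Databricks pipeline such checks would typically run over Spark DataFrames or via a dedicated framework.

```python
# Minimal data-quality check of the kind a pipeline might run before
# publishing a dataset. All column names and rules are hypothetical.

REQUIRED_COLUMNS = {"order_id", "amount", "order_date"}

def validate_rows(rows):
    """Return (row_index, problem) pairs for rows failing basic checks."""
    problems = []
    for i, row in enumerate(rows):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            problems.append((i, f"missing columns: {sorted(missing)}"))
            continue
        if row["order_id"] is None:
            problems.append((i, "null order_id"))
        if not isinstance(row["amount"], (int, float)) or row["amount"] < 0:
            problems.append((i, "invalid amount"))
    return problems

rows = [
    {"order_id": 1, "amount": 9.99, "order_date": "2024-01-01"},
    {"order_id": None, "amount": 5.00, "order_date": "2024-01-02"},
    {"order_id": 3, "amount": -2.50, "order_date": "2024-01-03"},
]
issues = validate_rows(rows)
```

A production version of this idea would emit metrics to an observability tool and fail or quarantine the batch rather than just collecting a list.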

Requirements

We are looking for engineers with an inquisitive mindset, a strong understanding of data engineering best practices, and a passion for continuous learning. You should be comfortable taking ownership, influencing technical decisions, and contributing ideas as part of a collaborative and growing engineering team.

We value close collaboration over excessive documentation, so strong communication and interpersonal skills are essential. To succeed in this agile and forward-thinking environment, you should have solid experience with Databricks, cloud platforms, and modern data engineering tools and architectures.

  • 7+ years of experience in Data Engineering roles
  • Strong hands-on experience with Databricks and Apache Spark
  • Mandatory: Databricks Certified Professional credential
  • Excellent proficiency in SQL and Python
  • Strong understanding of distributed data processing, data modelling, and modern data architectures
  • Experience working with cloud data platforms such as Azure Synapse, Snowflake, Redshift, or BigQuery
  • Hands-on experience with batch and streaming data pipelines
  • Experience with orchestration and transformation tools such as Airflow, dbt, or similar
  • Solid understanding of CI/CD, Git, and DevOps practices for data platforms
  • Ability to work autonomously, take ownership, and deliver high-quality solutions
  • Strong communication skills with the ability to explain technical concepts clearly to both technical and non-technical stakeholders

Desirable Skills

  • Experience with real-time data streaming and event-driven architectures
  • Exposure to data governance, security, and access control in cloud environments
  • Experience across multiple cloud platforms (AWS, Azure, GCP)
  • Familiarity with DataOps, MLOps, or analytics engineering practices
