Databricks Data Engineer
Job description
As Promade Solutions continues to grow and deliver cutting-edge data and analytics solutions to both existing and new customers, we are looking for experienced Databricks Data Engineers who are passionate about building scalable, reliable, and high-performance data platforms.

As a Databricks Data Engineer, you will play a key role in designing, developing, and optimising modern data pipelines and lakehouse architectures. You will work closely with analytics, product, and engineering teams to deliver trusted, production-ready datasets that power reporting, advanced analytics, and data-driven decision-making.

- Design, build, and maintain scalable ETL/ELT pipelines for batch and streaming data workloads
- Develop and optimise Databricks Lakehouse solutions using Apache Spark and Delta Lake
- Design and maintain data models, data warehouses, and lake/lakehouse architectures
- Implement data quality, validation, observability, and monitoring frameworks
- Optimise data pipelines for performance, reliability, and cost efficiency
- Collaborate with cross-functional teams to deliver trusted, production-grade datasets
- Work extensively with Azure cloud services, including Azure Databricks, Azure Data Factory, Azure SQL DB, Azure Synapse, and Azure Storage
- Develop and manage stream-processing systems using tools such as Kafka and Azure Stream Analytics
- Write clean, maintainable Python and SQL code and develop high-quality Databricks notebooks
- Support CI/CD pipelines, source control, and automated deployments for data workloads
- Contribute to improving data engineering standards, frameworks, and best practices across the organisation
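To give a flavour of the data quality and validation work described above, here is a minimal, standalone Python sketch of row-level quality checks. It is illustrative only: in a Databricks pipeline these rules would typically run against Spark DataFrames (for example via Delta Lake constraints or an expectations framework), and the record shape, column names, and rule names below are hypothetical assumptions, not part of the role.

```python
# Illustrative data-quality checks in plain Python. The `orders` records,
# column names, and thresholds are hypothetical examples.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    nulls = sum(1 for row in rows if row.get(column) is None)
    return nulls / len(rows)

def is_unique(rows, column):
    """True if every non-null value in `column` appears exactly once."""
    values = [row.get(column) for row in rows if row.get(column) is not None]
    return len(values) == len(set(values))

def validate(rows, key_column, max_null_rate=0.0):
    """Return a list of human-readable failures (empty list means passed)."""
    failures = []
    if null_rate(rows, key_column) > max_null_rate:
        failures.append(f"{key_column}: null rate above {max_null_rate}")
    if not is_unique(rows, key_column):
        failures.append(f"{key_column}: duplicate values found")
    return failures

orders = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": 5.5},
    {"order_id": 2, "amount": 7.0},  # duplicate key -> should fail validation
]
print(validate(orders, "order_id"))  # prints ['order_id: duplicate values found']
```

In production these checks would be expressed declaratively (e.g. as Delta Lake CHECK constraints or pipeline expectations) and wired into monitoring, rather than hand-rolled as above.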
Requirements
We are looking for engineers with an inquisitive mindset, a strong understanding of data engineering best practices, and a passion for continuous learning. You should be comfortable taking ownership, influencing technical decisions, and contributing ideas as part of a collaborative and growing engineering team.
We value close collaboration over excessive documentation, so strong communication and interpersonal skills are essential. To succeed in this agile and forward-thinking environment, you should have solid experience with Databricks, cloud platforms, and modern data engineering tools and architectures.

- 7+ years of experience in Data Engineering roles
- Strong hands-on experience with Databricks and Apache Spark
- Mandatory: Databricks Certified Professional credential
- Excellent proficiency in SQL and Python
- Strong understanding of distributed data processing, data modelling, and modern data architectures
- Experience working with cloud data platforms such as Azure Synapse, Snowflake, Redshift, or BigQuery
- Hands-on experience with batch and streaming data pipelines
- Experience with orchestration and transformation tools such as Airflow, dbt, or similar
- Solid understanding of CI/CD, Git, and DevOps practices for data platforms
- Ability to work autonomously, take ownership, and deliver high-quality solutions
- Strong communication skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders
Desirable Skills
- Experience with real-time data streaming and event-driven architectures
- Exposure to data governance, security, and access control in cloud environments
- Experience across multiple cloud platforms (AWS, Azure, GCP)
- Familiarity with DataOps, MLOps, or analytics engineering practices