Senior Data Engineer - Databricks
Job description
You will work closely with clients and internal stakeholders to translate business requirements into robust technical solutions, ensuring projects are delivered on time, within scope, and aligned with client expectations. You will also contribute to capability growth and technical excellence within the Databricks delivery function.

Key responsibilities
- Lead technical delivery within data engineering squads deployed on client engagements
- Work with internal and client stakeholders to refine business requirements into functional and non-functional specifications
- Build and maintain strong client relationships, supporting commercial objectives and long-term partnerships
- Participate in continuous learning and upskilling, including funded certifications
- Contribute to a collaborative team culture focused on knowledge sharing and technical excellence
Requirements
We are seeking a skilled and experienced Data Engineer to join a growing Databricks-focused data engineering practice. The successful candidate will have a strong background in delivering scalable, cloud-based data solutions, particularly within the Databricks ecosystem.
You will have experience designing, building, and optimising data pipelines and architectures using technologies such as Apache Spark, Delta Lake, and major cloud platforms including AWS, Azure, or GCP. Advanced proficiency in Python and SQL is essential, along with experience integrating CI/CD practices and automated testing into data workflows.
You will play a key role in delivering high-quality data engineering projects to enterprise clients, often collaborating with Databricks' professional services teams. In addition to hands-on technical delivery, you will provide technical leadership within delivery squads, ensuring best practices are followed throughout the project lifecycle.

- 4+ years' hands-on experience building and optimising data pipelines, architectures, and large-scale data processing systems
- Strong experience working within cloud platforms such as AWS, Azure, or GCP
Databricks & Spark Expertise
- Strong working knowledge of Databricks and Apache Spark
- Experience developing and optimising data workflows using Delta Lake and Databricks SQL
- Databricks Certified Data Engineer Professional (desirable)
Programming Proficiency
- Advanced proficiency in Python and SQL
- Ability to write clean, efficient, well-documented ETL/ELT code and transformation logic
Cloud & Architecture Knowledge
- Experience designing cloud-native data solutions
- Familiarity with cloud storage (e.g., S3, Azure Data Lake)
- Exposure to cloud data warehouses (e.g., Snowflake, Redshift)
CI/CD & Automation
- Experience implementing CI/CD pipelines for data engineering workflows
- Familiarity with automated testing frameworks for data pipelines
Data Governance & Security
- Understanding of data governance frameworks
- Experience implementing data security and compliance best practices
Collaboration & Leadership
- Experience working cross-functionally with data scientists, BI developers, product owners, and stakeholders
- Proven ability to provide technical leadership within delivery teams
- Strong problem-solving and analytical skills
Client Engagement
- Experience working directly with clients
- Ability to translate business requirements into technical solutions
- Strong communication and presentation skills