Data Engineer (Data Modeling)

Hays plc
Cambridge, United Kingdom
2 days ago

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Shift work
Languages
English

Job location

Cambridge, United Kingdom

Tech stack

Agile Methodologies
JIRA
Azure
Business Systems
Software Documentation
Continuous Integration
Data Validation
Data Governance
ETL
Data Transformation
Data Systems
Iterative and Incremental Development
Python
Performance Tuning
Power BI
SQL Databases
Git
Data Lineage
Software Version Control
Data Pipelines
Dynatrace
ServiceNow
Databricks

Job description

  • Design, build, and maintain scalable ETL pipelines in Databricks to integrate data from multiple business systems such as ServiceNow, JIRA, and Azure DevOps (ADO).
  • Optimise data workflows for performance, scalability, and reliability.
  • Implement data validation and quality checks to ensure trustworthy reporting in downstream tools such as Power BI (see the pipeline sketch after this list).
  • Develop back-end data models and schema designs for analytics and reporting.
  • Collaborate with the Staff Data Engineer and Visualisation Developer to align technical delivery with reporting needs.
  • Manage code in Git and support CI/CD processes for Databricks deployments.
  • Contribute to data lineage documentation, standards, and governance best practices.
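
To make these responsibilities concrete, here is a minimal, illustrative sketch of an ETL step with a validation gate, assuming PySpark on Databricks; the table and column names (raw_servicenow.incidents, incident_id, curated.incidents) are hypothetical and not part of the role description.

```python
# Minimal ETL sketch with a validation gate, assuming PySpark on Databricks.
# All table and column names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw incident records landed from a source system.
raw = spark.table("raw_servicenow.incidents")

# Transform: normalise timestamps and derive a resolution-time measure.
clean = (
    raw.withColumn("opened_at", F.to_timestamp("opened_at"))
       .withColumn("resolved_at", F.to_timestamp("resolved_at"))
       .withColumn(
           "resolution_hours",
           (F.unix_timestamp("resolved_at") - F.unix_timestamp("opened_at")) / 3600,
       )
)

# Validate: fail fast on null or duplicate keys so bad rows never
# reach downstream Power BI reports.
total = clean.count()
null_keys = clean.filter(F.col("incident_id").isNull()).count()
assert null_keys == 0, f"{null_keys} rows have a null incident_id"
assert clean.select("incident_id").distinct().count() == total, \
    "duplicate incident_id values found"

# Load: write a curated Delta table for reporting.
clean.write.format("delta").mode("overwrite").saveAsTable("curated.incidents")
```

Failing fast at the validation step, rather than silently filtering bad rows, is one common way to keep downstream reporting tables trustworthy.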

Requirements

We are seeking a motivated and detail-oriented Data Engineer with a passion for designing and delivering high-quality data solutions in Databricks. You will be responsible for building and optimising data pipelines that form the foundation of our reporting and analytics ecosystem.

Using your technical expertise, you will design and maintain efficient ETL processes to integrate data from systems such as ServiceNow, JIRA, and Dynatrace, ensuring accuracy, performance, and scalability. You will thrive in a fast-paced, collaborative environment and be eager to learn, innovate, and continuously improve our data platform. If you are passionate about building reliable data foundations and enabling insights that drive smarter decisions, this is the ideal role for you!

  • Proven experience in building and maintaining ETL pipelines using Databricks.
  • End-to-end data modelling experience.
  • Strong knowledge of SQL and Python for data transformation and automation.
  • Proven experience in data modelling, schema design, and data quality validation (see the schema sketch after this list).
  • Solid understanding of data performance tuning and pipeline optimisation in Databricks.
  • Experience working with Git-based version control and collaborative development workflows.
  • Strong analytical and problem-solving skills, with an eye for efficiency and accuracy.
  • Excellent communication and collaboration skills, comfortable working with both technical and non-technical partners.
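
For the modelling and schema-design expectations above, here is a minimal sketch of a star-schema layout, assuming Spark SQL on Databricks; the reporting.dim_service and reporting.fact_ticket tables are hypothetical examples, not named in the posting.

```python
# Star-schema sketch in Spark SQL on Databricks; all table and column
# names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Dimension: one row per service, keyed for joins from the fact table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS reporting.dim_service (
        service_key  BIGINT,
        service_name STRING,
        owner_team   STRING
    ) USING DELTA
""")

# Fact: one row per ticket, with a foreign key into the dimension and a
# pre-computed measure that Power BI can aggregate cheaply.
spark.sql("""
    CREATE TABLE IF NOT EXISTS reporting.fact_ticket (
        ticket_key       BIGINT,
        service_key      BIGINT,
        opened_date      DATE,
        resolution_hours DOUBLE
    ) USING DELTA
""")
```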

"Nice To Have" Skills and Experience:

  • Familiarity with Agile delivery methods and iterative development practices.
  • Knowledge of data governance and data lineage documentation standards.
  • Exposure to automation and CI/CD frameworks within Databricks.
