Data Engineer
Job description
We are looking for trusted data professionals who are at home delivering value for our clients through our triple-A approach to data: assess, accelerate, and amplify. You will specialize in the fast execution of data projects from concept to delivery.
Requirements
You should be innovative and data-centric, with a passion for data, a proven history of working with the associated data technologies, and a love of fast-paced agile environments. We look for Data Engineers who are keen to learn, progress, and succeed whilst balancing the needs and expectations of our customers.
We would love to hear from people who live and breathe our core values of trust, agility, efficiency, and innovation, and who are committed to developing themselves and our teams. You should be willing to share skills, knowledge, and experiences both internally and with the wider public community.
What you will do
For the Data Engineer role, we are looking for a solid understanding of the Microsoft Intelligent Data Platform stack and for you to know SQL inside out. You will work as part of a close-knit team on a variety of delivery projects for clients, so you should have the technical ability to code, test, and implement data solutions in our associated technologies to help assess, accelerate, and amplify value for the customer. You should be a self-starter and a problem solver, capable of working in a fast-paced team, and able to relay technical information to non-technical audiences.
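By way of illustration only, knowing SQL "inside out" in this context means being comfortable with constructs such as window functions. The sketch below (the table and data are hypothetical, run here against an in-memory SQLite database purely for demonstration) ranks each customer's orders by value within a partition:

```python
import sqlite3

# Hypothetical example: rank each customer's orders by amount using a
# window function -- the kind of SQL fluency the role calls for.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_id INTEGER, amount REAL);
    INSERT INTO orders VALUES
        ('acme',   1, 120.0),
        ('acme',   2, 300.0),
        ('globex', 3,  50.0),
        ('globex', 4,  75.0);
""")

# RANK() restarts for each customer partition, largest amount first.
rows = conn.execute("""
    SELECT customer, order_id, amount,
           RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY customer, rnk
""").fetchall()

for row in rows:
    print(row)
```

The same query pattern carries over directly to Spark SQL and Synapse, which is why fluency with it transfers across the stack listed below.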
How you will do it
We appreciate that technologies are always evolving, but we would like to hear from you if you have proven hands-on knowledge of the following technologies:
- Databricks
- Dedicated SQL Pools
- Synapse Analytics
- Data Factory
To set yourself up for success, you should have in-depth knowledge of Apache Spark, SQL, and Python, along with solid development practices. You will also need in-depth knowledge of supporting Azure services such as Data Lake, Key Vault, and DevOps. Within Databricks, your experience should include Delta, Unity Catalog, and in-depth cluster knowledge, along with performance tuning and debugging.