Azure Data Engineer
Job description
JATO is currently looking to hire a Data Engineer to join our agile teams. As a Data Engineer, you will work in a product delivery team, using your data analysis, ETL and cloud technology skills to discover, model, provision and store data at scale in the cloud.
- Ingest, store and provision data at scale on the Azure cloud platform using Data Factory or Synapse
- Work closely with colleagues in Operations, Architecture, Delivery, Product, and Software Engineering to collaboratively release platform increments
- Contribute toward solution level architecture and design
- Own your release from development to production following Azure DevOps
- Contribute to ADF and Synapse pipeline design improvements and best practices, following data engineering design principles
- Understand data issues, and analyse and work with Architects to address design challenges
- Migrate data from legacy databases and storage to Azure cloud-based lakes and warehouses
- Continuously enhance your knowledge base and skillset
Requirements
Do you have experience with relational databases?
- Minimum 2+ years' hands-on experience designing and developing complex ETL/ELT pipelines in the cloud (Azure Data Factory and/or Azure Synapse)
- Experience preparing data mappings from high-level requirements and taking those through to solution-level detail
- Firm understanding of data warehousing concepts, both in the lake and in traditional databases, with experience working end-to-end across the ETL lifecycle, from data acquisition to the consumption layer
- Knowledge and experience in value-added areas such as automation and scripting
- Knowledgeable about the different types of cloud-based storage resources and distributed systems
- Experience with relational databases (SQL Server, MySQL, etc.) and NoSQL, and writing queries against both
- Hands-on analytical and data exploration skills
- Experience working in an Agile methodology
- Good knowledge of API-based Integration and Microservices architecture
- Practical experience of improving data quality and efficiency in data pipelines
- Solid understanding of DevOps and CI/CD
- Understanding of compliance and lifecycle of data management
Desirable Skills
- Coding experience in any relevant language (Python, Pyspark etc.)
- Experience working with other Azure technologies such as Fabric, Databricks, Power Automate
- Experience working with non-relational databases (Cosmos DB, MongoDB, etc.) and handling unstructured data in ETL pipelines
Our values
JATO's core values are Integrity, People First, Collaboration, Innovation and Excellence.