Data Engineer (Azure, Databricks), (Remote) - International organisation
Job description
We are seeking a highly skilled Data Engineer to join our team, contributing to the design, development, and optimization of data solutions within cloud-based and distributed environments. The ideal candidate will have hands-on experience with Azure Cloud, Databricks, and PySpark, complemented by strong proficiency in SQL, Python, and modern data engineering tools. This role offers the opportunity to work on advanced data integration, analytics, and machine learning projects, ensuring efficient data processing, automation, and deployment of scalable solutions.
Requirements
Do you have experience in shell scripting?

The Data Engineer demonstrates strong analytical and technical capabilities, with the ability to design and implement efficient data architectures and pipelines in cloud environments. They possess solid programming skills in Python and SQL, combined with expertise in the Azure and Databricks platforms. The role requires a deep understanding of distributed computing, data modelling, and ETL processes, as well as practical experience with machine learning frameworks and DevOps practices, including CI/CD, version control, and containerization. Strong problem-solving, collaboration, and communication skills are essential to translate business needs into scalable, high-quality data solutions.
IT skills:
- Microsoft Azure Cloud (including Azure DevOps).
- Databricks, PySpark, dbt.
- Python and SQL.
- pandas, NumPy.
- Machine learning frameworks such as scikit-learn, PyTorch, Keras.
- Git.
- Docker, Kubernetes.
- Bash and Python scripting.
- Attunity.
- Continuous Integration / Continuous Deployment (CI/CD).
- Python Poetry, Databricks notebooks.
- Distributed computation, software design patterns, prompt engineering, data exploration and analysis, model deployment and monitoring.
Language:
- English (C1).