AI Engineer
Job description
The AI Engineer is responsible for designing, building, and deploying production-grade AI solutions. Unlike a traditional data scientist, your focus will be on the engineering lifecycle: transforming ML and AI solutions into scalable products, managing automated retraining loops, and ensuring the robustness of our AI infrastructure within the Azure and Databricks environment.
- Architect & Deploy: Lead the transition of machine learning models from notebooks to production-ready services using Azure Cloud and Databricks. Provide Data Science teams with everything they need throughout the project lifecycle, and build common templates, pipelines, and libraries to accelerate development.
- MLOps Excellence: Build and maintain CI/CD pipelines for machine learning and AI to automate model deployment, versioning, and monitoring.
- Collaborative Integration: Act as the bridge between Data Scientists and Platform Engineers to design scalable, compliant solutions for the different markets.
- Operational Optimization and Governance: Identify and implement data-driven improvements to operational workflows across different countries, ensuring localized models scale globally.
- Infrastructure Management: Manage the ML lifecycle using MLflow in Databricks for experiment tracking and model registry management.
Requirements
- Education: Bachelor's or master's degree in Computer Science, Software Engineering, or a highly quantitative field.
- Platform Expertise: 3+ years of hands-on experience with the Azure Cloud ecosystem (Azure DevOps, Azure Kubernetes Service, Azure Pipelines, Azure Foundry…). Experience with Databricks will be highly valued.
- MLOps Tooling: Proven experience with MLflow, Docker, and Kubernetes, or any other MLOps toolset for containerising and orchestrating ML workloads.
- Programming: High proficiency in Python (specifically for production-grade code, not just scripting) and SQL.
- Engineering Mindset: Deep understanding of software development patterns, API design (FastAPI/Flask), and unit testing for ML components.
- Model Deployment: Strong knowledge of real-time vs. batch inference patterns and how to monitor for model drift and data skew in production.
- Additional Data Engineering knowledge: Experience in data engineering on Databricks with PySpark and SQL, or experience in data architecture (Medallion, ETL/ELT, data processing and transformation), will be a plus.
- Experience with orchestrators such as Airflow or Windmill will also be viewed favourably.
What we offer:
- A challenging and exciting position with development opportunities in an international company with growth ambitions.
- Highly motivated and skilled co-workers globally.
- Being part of a supportive culture where effort counts.
- We facilitate your learning through a variety of learning arenas.