AI Engineer
Job description
As an AI Engineer in Madrid, you'll play a key role in shaping tomorrow's personal assistant technology. Your daily work will involve crafting breakthrough prototypes that redefine how users interact with digital assistants while collaborating closely with talented teams across disciplines. You'll automate intricate tasks using both classic machine learning approaches and modern LLM frameworks to create seamless synergy between human expertise and machine intelligence. By enhancing information retrieval systems and building smart monitoring tools, you'll directly contribute to raising operational standards. You'll also optimise annotation flows for superior data quality and leverage proprietary datasets with advanced techniques. Building robust APIs and backend services will be central to delivering scalable solutions that set new benchmarks for personalised assistance.
- Design imaginative AI solutions and develop prototypes that push the boundaries of user experiences in the personal assistant industry.
- Harness the newest breakthroughs in artificial intelligence and machine learning to build highly personalised recommendation systems that captivate users every time.
- Create tools that empower human personal assistants (copilots), ensuring smooth collaboration between people and machines for maximum impact.
- Automate complex tasks using a full spectrum of AI technologies, from traditional machine learning models to innovative LLM-agent frameworks, making processes smarter and faster.
- Elevate information retrieval systems with advanced search techniques and Retrieval-Augmented Generation (RAG) methods to boost task quality for human assistants (a brief retrieval sketch follows this list).
- Drive operational excellence by developing intelligent monitoring and intervention systems that guarantee top-tier service delivery.
- Advance the AI operations stack by designing robust evaluation frameworks for prompt iteration, model improvement, and agile experimentation.
- Streamline human annotation workflows to maximise resource efficiency for annotation, model comparison, and golden dataset generation.
- Unlock the full potential of proprietary data using leading-edge techniques like fine-tuning, reinforcement learning, and RAG methodologies.
- Architect, build, and maintain APIs, databases, and backend services that are essential for delivering scalable AI solutions.
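For context on the RAG work referenced above, the sketch below shows only the retrieval half of a retrieval-then-generate loop. It is an illustration, not part of this role's actual stack: the bag-of-words "embedding", the in-memory document list, and the sample snippets are placeholders for the embedding model and vector store a production system would use.

```python
# Minimal sketch of the retrieval step in a RAG pipeline. The bag-of-words
# "embedding" is a stand-in; a real system would call a learned embedding
# model and query a vector store instead of an in-memory list.
import numpy as np

def embed(text: str, vocab: dict[str, int]) -> np.ndarray:
    """Toy bag-of-words vector; placeholder for a learned embedding model."""
    vec = np.zeros(len(vocab))
    for token in text.lower().split():
        if token in vocab:
            vec[vocab[token]] += 1.0
    return vec

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    vocab = {w: i for i, w in enumerate({t for d in documents for t in d.lower().split()})}
    doc_vecs = np.stack([embed(d, vocab) for d in documents])
    q = embed(query, vocab)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved snippets would then be inserted into the LLM prompt as context.
print(retrieve("book a table for dinner", [
    "How to book a restaurant table for a client dinner.",
    "Guidelines for filing expense reports.",
    "Checklist for arranging travel and dinner reservations.",
]))
```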
Requirements
Your background as an AI Engineer means you bring a wealth of technical know-how and practical experience across diverse machine learning domains. You're skilled in Python programming, with advanced SQL abilities for effective data handling. Your experience deploying ML models, both online via APIs and offline through batch jobs, enables you to deliver reliable, production-ready solutions. You're comfortable managing MLOps stacks for robust experiment tracking and real-time model monitoring. Your sharp product sense helps align technical innovation with strategic business goals, while your knowledge of foundational LLM technologies supports strong prompt engineering. Additional strengths, such as an understanding of embeddings/RAG systems or prior work with backend frameworks, further enhance your ability to make a meaningful impact within collaborative teams focused on redefining personal assistance.
- At least four years of hands-on experience in machine learning engineering roles within rapidly evolving environments.
- Expert-level proficiency in machine learning concepts such as Generalised Linear Models, Gradient Boosting Machines, Deep Learning architectures, and Probabilistic Modelling; specialised knowledge in recommender systems is highly desirable.
- Advanced engineering skills in Python programming paired with strong data manipulation abilities using SQL databases.
- Comprehensive understanding of ML model deployment patterns for both online API-based solutions and offline batch processing jobs; ability to design production-grade systems is essential.
- Practical experience with MLOps stacks focused on experiment tracking, performance monitoring, and maintaining models in live production settings.
- Exceptional product intuition combined with business acumen; deep insight into what makes machine learning projects successful from concept through implementation.
- Solid grasp of foundational LLM technologies including open-source/closed-source models, prompt engineering strategies, evaluation frameworks, and function calling mechanisms (a brief function-calling sketch follows this list).
- Nice to have: In-depth understanding of embeddings and Retrieval-Augmented Generation (RAG) systems for enhanced information retrieval capabilities.
- Nice to have: Familiarity with LLM multi-agent architectures and common orchestration libraries such as langchain for coordinating complex agent interactions.
- Nice to have: Prior exposure to backend components such as the Django web framework, the Celery task queue, the MongoDB document database, or the PostgreSQL relational database.
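To illustrate the function-calling mechanism mentioned above, here is a minimal sketch assuming the OpenAI Python SDK. The model name, the search_calendar tool, and its schema are hypothetical placeholders for illustration only, not part of this role's actual stack.

```python
# Minimal sketch of LLM function calling with the OpenAI Python SDK.
# The model name and the tool schema below are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a hypothetical tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "search_calendar",
        "description": "Look up a user's calendar events for a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "ISO date, e.g. 2024-05-01"},
            },
            "required": ["date"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What do I have scheduled tomorrow?"}],
    tools=tools,
)

# If the model chose to call the tool, inspect the structured arguments it produced.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```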