AI Engineer
Requirements
We are seeking Engineers skilled in Python, with a strong focus on GenAI and LLMs, to lead the integration of cutting-edge language technologies into real-world applications.
If you're passionate about building scalable, responsible, and high-impact GenAI solutions, then this could be the role for you!
We're looking for Engineers with strong core technical skills: Python programming; data handling with NumPy, Pandas, and SQL; and version control with Git/GitHub.
Any experience with these GenAI use cases would be relevant and desirable: chatbots, copilots, document summarisation, Q&A, and content generation.
To make your application as relevant as possible, please ensure your CV demonstrates any prior experience you have in the areas below:
System Integration & Deployment
- Model Deployment: Flask, FastAPI, MLflow
- Model Serving: Triton Inference Server, Hugging Face Inference Endpoints
- API Integration: OpenAI, Anthropic, Cohere, Mistral APIs
- LLM Frameworks: LangChain, LlamaIndex for building LLM-powered applications
- Vector Databases (nice-to-have): FAISS, Weaviate, Pinecone, Qdrant
- Retrieval-Augmented Generation (RAG): experience building hybrid systems combining LLMs with enterprise data

MLOps & Infrastructure
- MLOps: model versioning, monitoring, logging
- Bias Detection & Mitigation
- Content Filtering & Moderation
- Explainability & Transparency
- LLM Safety & Guardrails: hallucination mitigation, prompt validation, safety layers
- Azure Cloud Experience

Collaboration & Delivery
- Cross-functional Collaboration: working with software engineers, DevOps, and product teams
- Rapid Prototyping: building and deploying MVPs
- Understanding of ML & LLM Techniques: to support integration, scaling, and responsible deployment
- Prompt Engineering: designing and optimising prompts for LLMs across use cases

Model Evaluation & Monitoring
- Evaluation Metrics: perplexity, relevance, response quality, user satisfaction
- Monitoring in Production: drift detection, performance degradation, logging outputs
- Evaluation Pipelines: automating metric tracking via MLflow or custom dashboards
- A/B Testing: experience evaluating GenAI features in production environments

Does this sound like your next career move? Apply today!