LLM Engineer / AI Engineer

Adria Solutions Ltd
Winsford, United Kingdom
7 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Compensation
£60K

Job location

Winsford, United Kingdom

Tech stack

Artificial Intelligence
Data Governance
Linux
Open Source Technology
Systems Integration
Software Version Management
Web Services
Flask
Large Language Models
Prompt Engineering
Generative AI
FastAPI
Low Latency
Machine Learning Operations
Docker

Job description

My client is a fast-growing AI technology company building intelligent systems deployed in real-world, safety-critical environments. Their solutions combine advanced AI, data and edge technologies to support decision-making and reduce risk in high-hazard industrial settings. They are now looking to hire an LLM Engineer to help design and deliver large language model-powered capabilities across internal platforms and customer-facing products. This is a hands-on, production-focused role for someone with strong real-world LLM experience, not a purely research or experimental background.

The Role

You will:

  • Design, build and deploy LLM-driven features into live products and platforms
  • Work with both commercial and open-source LLMs, selecting the right model for each use case
  • Build and optimise RAG pipelines, embeddings and vector-based retrieval solutions
  • Develop APIs and services that integrate LLMs with existing AI, data and platform systems
  • Optimise solutions for performance, reliability, latency and cost
  • Collaborate with engineering, AI and product teams to identify and deliver high-value use cases
  • Ensure all solutions meet security, compliance and data governance standards

Requirements

  • Proven experience deploying LLMs in production
  • Strong Python development skills
  • Hands-on experience with:
      • Prompt engineering and evaluation
      • Retrieval-Augmented Generation (RAG)
      • Embeddings and vector databases (e.g. FAISS, Pinecone, Weaviate, Chroma)
  • Experience building LLM-backed services using FastAPI / Flask or similar
  • Strong understanding of trade-offs around accuracy, latency, cost and scalability

Highly Desirable

  • Experience working with or fine-tuning open-source LLMs
  • Familiarity with LLMOps (monitoring, evaluation, guardrails, versioning)
  • Experience integrating LLMs into complex or data-heavy systems
  • Docker and Linux experience
  • Background in regulated, industrial or safety-critical environments

The Right Mindset

  • Pragmatic, delivery-focused and comfortable working with ambiguity
  • Able to translate complex AI concepts into practical solutions
  • Confident owning problems end-to-end, from idea through to deployment
  • Motivated by building AI that has real-world impact
