E2E AI Engineer
Job description
As our Senior E2E AI Engineer, you are first and foremost experienced with development work in production. You are not just prototyping in notebooks; you are architecting the end-to-end infrastructure, DevOps, and cloud and data engineering work that powers our next-generation AI tools.
Your mission is to build the "pipes and plumbing" that make AI agents reliable, observable, and secure. You will own the full stack: from data engineering on BigQuery, to API development with FastAPI, to model serving on Vertex AI, to the DevOps/CI/CD pipelines that keep it all running. You sit at the intersection of Data/Software Engineering and AIOps.
You will report to our AI Lead and work closely with the Data and Insights team and leaders across different business units.
What you'll do
- Develop and deploy complex AI workflows (advanced RAG, multi-agent systems) integrated with core productivity tools.
- Build robust APIs and backends for RAG pipelines and AI agents.
- Design and maintain scalable data pipelines for ETL, ingestion, and data preparation to ground models in high-quality data.
- Manage the testing and deployment lifecycle, automating and versioning updates. Orchestrate models on Vertex AI (fine-tuning, serving, monitoring).
- Implement AIOps and agent observability (tracing, logging, monitoring), while enforcing security for model inputs, outputs, and infrastructure.
Requirements
- 6+ years of professional software/data engineering experience, including at least 2 years of hands-on, production-level AI development/engineering. Proven experience with advanced RAG use cases and at least one major GenAI framework (e.g., LangChain, LlamaIndex, or Haystack), plus one multi-agent framework (e.g., LangGraph, Google ADK, Vertex AI Agent Builder, or Microsoft Agent Framework).
- Expert-level Python (or TypeScript) skills. You write clean, modular, production-ready code. You are proficient with Pydantic (or a TypeScript equivalent) for validation and FastAPI for high-performance async I/O.
- Essential expertise in Google Cloud: BigQuery (complex SQL, data warehousing) and Vertex AI (deployment, serving). Strong capability in building ETL/ELT pipelines.
- You are comfortable with Git, Docker, and Kubernetes. You are experienced in DevOps and know how to set up CI/CD pipelines that automate testing and deployment. Practical knowledge of securing applications (API security, secrets management, IAM, input sanitization).
- Familiarity with the MCP and A2A protocols. Hands-on experience with at least one LLMOps/AI evaluation framework (e.g., GenAI Evals on Vertex AI, DSPy, Ragas) as well as high-quality vector databases (e.g., Weaviate, Milvus, Qdrant, or Neo4j).
- Experience in creative studios or digital production.
- Understanding of GDPR, data governance, and AI ethics.
- Experience fine-tuning SLMs and building specialized vertical AI agents, and experience with n8n and video generation models.
- Experience with machine learning and neural networks.