Staff MLOps Engineer (AI/ML Platform)
Job description
We're hiring a Staff MLOps Engineer to own the AI/ML platform at Cint. The immediate focus is supporting the Synthetic Data Platform - models for survey augmentation and respondent profiling - but the role's longer-term remit is broader: Trust Score (our respondent quality and fraud detection model) and other AI/ML initiatives need the same platform capabilities. You'll start by reviewing the current setup and deciding whether to extend it or rebuild parts of it, then build out the shared AI/ML platform from there.
The Team
You'll report into our Infrastructure and Data Engineering organisation, working in close partnership with the AI/ML team in Prague. This is deliberately a platform-with-feature-focus role: your day-to-day delivery serves the Synthetic Data team's needs, but your architectural remit covers all of Cint's AI/ML workloads.
- Assess and decide on the current pipeline: Audit the existing AI/ML training and serving setup. Decide what's worth building on and what needs to be rebuilt. Make the call and own the rationale.
- Build the shared AI/ML platform: Training infrastructure, experiment tracking, model registry, serving, monitoring. Built once, used by Synthetic, Trust Score, and whatever comes next.
- Oversee the full ML lifecycle: From data ingestion and feature processing to annotation workflows, ensuring the platform facilitates frictionless, rapid model iteration for Data Scientists.
- Own training infrastructure on Databricks and Unity Catalog: Make training fast, reproducible, and traceable. Lineage matters; reproducibility matters more.
- Model serving: Build the serving layer - low-latency APIs, batch scoring jobs, appropriate caching. Integrate with our Java/Spring services.
- Monitoring and drift: Build the observability our models need - data drift, model drift, accuracy regression, business metrics. Grafana dashboards, Prometheus metrics, clear alerts.
- Cost and performance: ML compute costs add up. Set the patterns for cost-effective training and serving, representing ML infrastructure spend and ROI credibly to finance stakeholders.
- Mentor and multiply: Act as a force multiplier by coaching AI/ML and Infrastructure engineers on engineering best practices. You don't just "do" the work; you set the bar for what "good" looks like.
- Drive AI tooling adoption: Model how AI-native development works for platform teams. Claude Code, agentic workflows, AI-assisted incident response.
Who You Are
- Databricks / Spark Native: Comfortable in Databricks. Unity Catalog experience is a strong plus.
- Kubernetes & Cloud: You've deployed ML workloads on Kubernetes. AWS (EKS) is our environment; familiarity is a plus.
- Be a Polyglot: Python, Scala or Java (for Spark), Kubernetes manifests, Terraform. AWS or GCP. You move between layers without friction.
- Deep ML Platform Expertise: You've led ML platform work at a serious scale. You have strong opinions on feature stores, model registries, serving patterns, and what "ML observability" actually means.
- Mature Engineering: You bring both breadth and depth of engineering excellence across multiple disciplines. This is a very senior position in our engineering organisation; setting the example in approach and behaviour is a key part of the role.
- Systems Architect: You think about the platform as a product with real users (your ML team). You design APIs, write docs, and measure adoption.
- Technical leader: You lead through standards, RFCs, and credibility - not meetings. You've mentored MLOps engineers into senior ICs.
- Pragmatic about buy-vs-build: You know when to adopt a managed service and when to build. You can defend either call to leadership.
- Commercially literate: You can justify platform investment to VP / C-suite and translate business priorities into a roadmap.
Requirements
Do you have experience with Databricks Unity Catalog?