Head of Developer Productivity
Job description
- Own the roadmap and prioritization for AI-powered developer workflows across the full SDLC: autonomous code generation agents, AI planning agents that break down epics into developer-ready tasks, AI code-review tools, AI-assisted test generation, automated vulnerability remediation, and AI-driven incident investigation.
- Act as the product owner for the AI agent ecosystem: define use cases, success metrics, adoption targets, safety policies, and rollout strategy. Measure end-to-end automation rates (from ticket creation to merged PR) and continuously raise the bar.
- Drive the architecture and integration of AI agents with existing developer tools - issue trackers, source control, CI/CD pipelines, IDEs, observability platforms, and the Internal Developer Platform - so that AI is embedded contextually where engineers already work, not bolted on as a side tool.
- Champion the adoption of Model Context Protocol (MCP) and similar standards to connect AI agents with internal documentation, service catalogs, logs, and production data, enabling agents that don't just read code but understand the full operational context.
Engineering Metrics & DORA
- Define, build and operate a unified engineering metrics platform that calculates DORA metrics (Lead Time for Changes, Deployment Frequency, Change Failure Rate, Mean Time to Recovery) from real data across source control, issue tracking and deployment systems, with consistent and auditable definitions.
- Set organization-wide targets for engineering delivery metrics and use them to identify bottlenecks, prioritize investments, and demonstrate the measurable impact of Developer Productivity and AI initiatives.
- Integrate AI-adoption metrics (automation rate, AI-generated vs. human-generated code, agent throughput, first-pass approval rate) into the engineering metrics platform so leadership can track the ROI of AI investments alongside traditional delivery health.
- Deliver self-service dashboards and automated reporting so that every team, vertical, and executive has real-time visibility into their productivity trends.
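To make the metrics work above concrete: the DORA calculations this platform would perform reduce to arithmetic over event timestamps pulled from source control and deployment systems. The sketch below is illustrative only; the data shapes (a commit-timestamp map and a list of deployments with their shipped commits) are assumptions, not a description of any existing dLocal system.

```python
from datetime import datetime, timedelta
from statistics import median

def lead_time_for_changes(commits, deployments):
    """Median time from commit to the deployment that shipped it.

    `commits` maps commit sha -> commit timestamp; `deployments` is a list
    of (deploy_timestamp, [shas]) pairs. Both shapes are illustrative.
    """
    deltas = []
    for deployed_at, shas in deployments:
        for sha in shas:
            committed_at = commits.get(sha)
            if committed_at is not None:
                deltas.append(deployed_at - committed_at)
    return median(deltas) if deltas else None

def change_failure_rate(deploy_count, failed_deploy_count):
    # Fraction of deployments that caused a failure in production.
    return failed_deploy_count / deploy_count if deploy_count else 0.0

# Example: two commits shipped by one afternoon deploy.
commits = {"a1": datetime(2024, 1, 1, 10), "b2": datetime(2024, 1, 1, 12)}
deployments = [(datetime(2024, 1, 1, 14), ["a1", "b2"])]
print(lead_time_for_changes(commits, deployments))  # median of 4h and 2h
```

The point of keeping definitions this explicit (and versioned in code) is auditability: every team's number is derived from the same formula over the same events, rather than from per-tool dashboards with divergent definitions.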
Developer Experience & Internal Platform
- Own the developer experience strategy: run SPACE-framework developer satisfaction surveys, track Customer Effort Scores, and use qualitative and quantitative signals to continuously improve how engineers interact with the platform.
- Champion the evolution of the Internal Developer Platform (service catalog, self-service actions, golden-path templates, scaffolding) so that creating a new service, deploying to production, or triggering an AI agent is a frictionless, self-service experience.
- Lead the internal frameworks and libraries team, ensuring that shared application frameworks across languages (Go, Java, JavaScript/TypeScript) are standardized, well-maintained, and continuously improved - reducing boilerplate and letting engineers focus on business logic.
- Oversee the CI/CD platform team: GitHub Actions workflows, container builds, GitOps deployments, release governance, and environment management - ensuring pipelines are fast, reliable, and increasingly AI-augmented.
People Management & Enablement
- Manage, grow and inspire a multi-squad Developer Productivity team (currently ~15 engineers across sub-teams focused on AI agents, internal frameworks, CI/CD tooling, and shared applications). Hire and develop technical leaders who think AI-first.
- Lead change management and enablement for AI adoption across engineering: create onboarding programs, RFCs, internal talks, documentation, and hands-on workshops that help every team understand and leverage AI tools effectively.
- Facilitate cross-team collaboration: gather feedback from Engineering Managers, Staff Engineers and ICs; translate pain points into actionable initiatives; communicate trade-offs and decisions clearly to both technical and non-technical stakeholders.
- Partner with Security and Compliance to embed secure-by-design practices into AI workflows, including clear policies for when AI agents can propose, implement, or auto-merge changes - especially for security-sensitive operations like vulnerability remediation.
What do we offer?
Besides the tailored benefits we have for each country, dLocal will help you thrive and go that extra mile by offering you:
- Flexibility: we have flexible schedules and we are driven by performance.
- Fintech industry: work in a dynamic and ever-evolving environment, with plenty to build and boost your creativity.
- Referral bonus program: our internal talents are the best recruiters - refer someone ideal for a role and get rewarded.
- Social budget: you'll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!
Requirements
- 8+ years of experience in Software Engineering, Platform Engineering, DevEx or related roles, with 4+ years in a management or senior leadership position overseeing Developer Productivity, Developer Experience, Internal Developer Platform, or similar teams.
- Strong people-management experience: track record of building, scaling and retaining high-performing engineering teams. Ability to coach engineers and managers toward technical leadership and create an environment where teams ship with high autonomy.
- Demonstrated experience leading AI-powered developer-tools initiatives: you have shipped or product-managed AI coding agents, LLM-based code-review tools, AI planning assistants, or similar developer-facing AI products - not just experimented with them, but driven measurable adoption and impact at scale.
- Deep understanding of DORA metrics and engineering-productivity measurement: you know how to define, instrument, and use Lead Time, Deployment Frequency, Change Failure Rate, and MTTR to drive decisions. Experience building or operating engineering metrics platforms is a strong plus.
- Excellent communication and stakeholder-management skills: comfortable working with engineering leaders, individual contributors, security, product, and AI teams in a fast-moving, global organization.
You will stand out if you have:
- Direct experience building or product-managing autonomous AI software-engineering agents (e.g., systems that take a task description, plan changes, generate code across repositories, run tests, and open pull requests - with human-in-the-loop review and safety guardrails).
- Experience with LLM orchestration patterns for developer tools: model routing, prompt engineering, agent planning phases, tool-use and function-calling architectures, MCP integrations, evaluation frameworks, and cost optimization.
- Experience deploying and operating systems on AWS and Kubernetes/EKS, including GitOps, Argo CD/Workflows, and advanced CI/CD pipelines.
- Strong track record of public speaking or internal evangelism around engineering culture, Developer Experience, or AI-first engineering practices.
- Experience using SPACE framework, developer NPS, and developer-experience surveys to measure and improve engineering satisfaction alongside delivery metrics.
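The autonomous-agent pattern described above (take a task, plan changes, generate code, run tests, open a pull request with human-in-the-loop review) can be sketched abstractly. Every function below is a hypothetical stub standing in for an LLM call or a CI step; this is a shape of the loop under stated assumptions, not any particular product's implementation.

```python
# Hypothetical sketch of an autonomous coding-agent loop with
# human-in-the-loop review. All functions are illustrative stubs.

def plan_changes(task_description):
    # In a real agent, an LLM planning phase would break the task
    # into concrete file-level edits; here we return a single stub edit.
    return [{"file": "app.py", "edit": f"implement: {task_description}"}]

def apply_edits(edits):
    # Code generation for each planned edit (stubbed).
    return {e["file"]: e["edit"] for e in edits}

def run_tests(changeset):
    # Would execute the project's test suite against the changes (stubbed).
    return True

def open_pull_request(changeset, require_human_review=True):
    # Safety guardrail: the agent never auto-merges; a human approves.
    status = "awaiting_review" if require_human_review else "merged"
    return {"changes": changeset, "status": status}

def run_agent(task_description):
    edits = plan_changes(task_description)
    changeset = apply_edits(edits)
    if not run_tests(changeset):
        return {"status": "tests_failed"}
    return open_pull_request(changeset)
```

The design choice worth noting is the final gate: keeping `require_human_review=True` as the default encodes the "propose, don't auto-merge" policy the role is expected to define for security-sensitive operations.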