Senior DataOps Engineer

dlocal
Barcelona, Spain
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Shift work
Languages
English

Job location

Barcelona, Spain

Tech stack

Airflow
Amazon Web Services (AWS)
Application Layers
Big Data
Cloud Computing
Software Quality
Computer Engineering
Data as a Service
Directed Acyclic Graph (Directed Graphs)
Information Engineering
Data Governance
Data Infrastructure
Software Debugging
DevOps
Distributed Computing Environment
GitHub
Monitoring of Systems
Python
Octopus Deploy
Prometheus
DataOps
SQL Databases
Parquet
Datadog
CircleCI
Data Logging
Pulumi
Grafana
Spark
Reliability of Systems
CloudFormation
Data Lake
GitLab CI
Kubernetes
Information Technology
Data Management
CloudWatch
Terraform
Docker
Jenkins
Databricks

Job description

As a Senior DataOps Engineer, you'll be a strategic professional shaping the foundation of our data platform. You'll design and evolve scalable infrastructure on Kubernetes, operate Databricks as our primary data platform, enable data governance and reliability at scale, and ensure our data assets are clean, observable, and accessible.

  • Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently, using Kubernetes and Databricks as core building blocks.

  • Design, build, and maintain Kubernetes-based infrastructure, owning deployment, scaling, and reliability of data workloads running on our clusters.
  • Operate Databricks as our primary data platform, including workspace and cluster configuration, job orchestration, and integration with the broader data ecosystem.
  • Improve existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency across batch and streaming workloads.
  • Build and maintain CI/CD pipelines for data applications (DAGs, jobs, libraries, containers), automating testing, deployment, and rollback.
  • Implement release strategies (e.g., blue/green, canary, feature flags) where relevant for data services and platform changes.
  • Establish and maintain robust data governance practices (e.g., contracts, catalogs, access controls, quality checks) that empower cross-functional teams to access and trust data.
  • Build a framework to move raw datasets into clean, reliable, and well-modeled assets for analytics, modeling, and reporting, in partnership with Data Engineering and BI.
  • Define and track SLIs/SLOs for critical data services (freshness, latency, availability, data quality signals).
  • Implement and own monitoring, logging, tracing, and alerting for data workloads and platform components, improving observability over time.
  • Lead and participate in on-call rotation for data platforms, manage incidents, and run structured postmortems to drive continuous improvement.
  • Investigate and resolve complex data and platform issues, ensuring data accuracy, system resilience, and clear root-cause analysis.
  • Maintain high standards for code quality, testing, and documentation, with a strong focus on reproducibility and observability.
  • Work closely with the Data Enablement team, BI, and ML stakeholders to continuously evolve the data platform based on their needs and feedback.
  • Stay current with industry trends and emerging technologies in DataOps, DevOps, and data platforms to continuously raise the bar on our engineering practices.

Nice to have

  • Experience designing and maintaining DAGs with Apache Airflow or similar orchestration tools (Dagster, Prefect, Argo Workflows).
  • Familiarity with modern data formats and table formats (e.g., Parquet, Delta Lake, Iceberg).
  • Experience acting as a Databricks admin/developer, managing workspaces, clusters, compute policies, and jobs for multiple teams.
  • Exposure to data quality, data contracts, or data observability tools and practices.

What do we offer?

Besides the tailored benefits we have for each country, dLocal will help you thrive and go that extra mile by offering you:

  • Flexibility: we have flexible schedules and we are driven by performance.

  • Fintech industry: work in a dynamic and ever-evolving environment, with plenty to build and boost your creativity.

  • Referral bonus program: our internal talents are the best recruiters. Refer someone ideal for a role and get rewarded.
  • Social budget: you'll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!

Requirements

  • Bachelor's degree in Computer Engineering, Data Engineering, Computer Science, or a related technical field (or equivalent practical experience).
  • Proven experience in data engineering, platform engineering, or backend software development, ideally in cloud-native environments.
  • Deep expertise in Python and/or SQL, with strong skills building data or platform tooling.
  • Strong experience with distributed data processing frameworks such as Apache Spark (Databricks experience strongly preferred).
  • Solid understanding of cloud platforms, especially AWS and/or GCP.
  • Hands-on experience with containerization and orchestration: Docker and Kubernetes (EKS, GKE, AKS, or equivalent).
  • Proficiency with Infrastructure-as-Code (e.g., Terraform, Pulumi, CloudFormation) for managing data and platform components.
  • Experience implementing CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins, CircleCI, ArgoCD, Flux) for data workloads and services.
  • Experience in monitoring & observability (metrics, logging, tracing) using tools like Prometheus, Grafana, Datadog, CloudWatch, or similar.
  • Experience with incident management: participating in or leading on-call rotations, handling incidents and running postmortems, and building automation and guardrails to prevent regressions.
  • Strong analytical thinking and problem-solving skills, comfortable debugging across infrastructure, network, and application layers.
  • Able to work autonomously and collaboratively.
