Senior Software Engineer - Distributed Systems

Decentriq

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Remote

Tech stack

Artificial Intelligence
Airflow
Big Data
Code Coverage
Information Engineering
Distributed Systems
Job Scheduling
Python
Data Processing
Large Language Models
Spark
Backend
Pandas
PySpark
Kubernetes
Information Technology
Data Pipelines
Databricks
Microservices

Job description

Would you like to help us make the advertising industry ready for the 1st-party era? Then we'd love to hear from you!

  • Own, Design & Operate Data Pipelines - Take full responsibility for all pandas- and Spark-based pipelines, from development through production and monitoring.
  • Advance our ML Models - Improve and productionise models for AdTech use cases such as lookalike modelling, audience expansion, and campaign measurement.
  • Engineer for the Invisible - Because data inside confidential enclaves is literally invisible (even to root), build extra-robust validation at the data source, exhaustive test coverage, and self-healing jobs to guarantee reliability (see the sketch after this list).
  • Collaborate Cross-Functionally - Work closely with data scientists, backend engineers (Rust), and product teams to ship features end-to-end.
  • AI-Powered Productivity - Leverage LLM-based code assistants, design generators, and test-automation tools to move faster and raise the quality bar. Share your workflows with the team.
  • Drive Continuous Improvement - Profile, benchmark, and tune Spark workloads, introduce best practices in orchestration & observability, and keep our tech stack future-proof.
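
To make the source-level validation mentioned above concrete, here is a minimal PySpark sketch of the kind of data-quality gate this role involves; the schema, column names, and storage paths are illustrative assumptions, not part of Decentriq's actual codebase.

    # Minimal sketch of validating data before it enters an enclave pipeline.
    # Schema, column names, and paths are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.appName("audience-ingest-validation").getOrCreate()

    expected_schema = StructType([
        StructField("user_id", StringType(), nullable=False),
        StructField("segment", StringType(), nullable=False),
        StructField("event_ts", TimestampType(), nullable=False),
    ])

    # Enforce the schema at read time so malformed files fail fast, outside the enclave.
    df = spark.read.schema(expected_schema).parquet("s3://example-bucket/raw/audience/")

    # Basic quality gates: null keys and duplicate identifiers.
    null_keys = df.filter(F.col("user_id").isNull() | F.col("segment").isNull()).count()
    duplicates = df.count() - df.dropDuplicates(["user_id", "event_ts"]).count()

    if null_keys > 0 or duplicates > 0:
        # Abort before publishing; once data is inside the enclave it can no longer be inspected.
        raise ValueError(f"Validation failed: {null_keys} null keys, {duplicates} duplicate rows")

    df.write.mode("overwrite").parquet("s3://example-bucket/validated/audience/")

Failing loudly at ingestion is what makes the later "invisible" stages of the pipeline trustworthy.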

Requirements

  • (Must have) Bachelor's, Master's, or PhD in Computer Science, Data Engineering, or a related field and 5+ years of professional experience.
  • (Must have) Expert-level Python plus solid hands-on experience with pandas, PySpark/Scala Spark, and distributed data processing.
  • (Must have) Proven track record building resilient, production-grade data pipelines with rigorous data-quality and validation checks.
  • (Must have) Experience running workloads in Databricks, Spark on Kubernetes, or other cloud/on-prem big-data platforms.
  • (Plus) Working knowledge of the ML lifecycle and model serving; familiarity with techniques for audience segmentation or lookalike modelling is a big plus.
  • (Plus) Exposure to confidential computing, secure enclaves, homomorphic encryption, or similar privacy-preserving tech.
  • (Plus) Rust proficiency (we use it for backend services and compute-heavy client-side modules).
  • (Plus) Data-platform skills: operating Spark clusters, job schedulers, or orchestration frameworks (Airflow, Dagster, custom schedulers); a minimal orchestration example follows this list.
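
As a flavour of the orchestration work referenced above, a bare-bones Airflow DAG that schedules a daily PySpark job might look like the following, assuming a recent Airflow 2.x with the Apache Spark provider installed; the DAG id, schedule, and script path are hypothetical.

    # Bare-bones Airflow DAG scheduling a daily PySpark job with retries.
    # The DAG id, schedule, and script path are hypothetical examples.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="daily_audience_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
    ) as dag:
        SparkSubmitOperator(
            task_id="run_audience_pipeline",
            application="/opt/jobs/audience_pipeline.py",  # hypothetical PySpark entry point
            conn_id="spark_default",
        )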

Benefits & conditions

  • The opportunity to create, shape, and benefit from a young company.
  • An amazing and fun team that is distributed all over Europe.
  • Competitive salary.
  • A lot of opportunities for self-development.

About the company

Decentriq is the rising leader in data-clean-room technology. With Decentriq, advertisers, retailers, and publishers securely collaborate on 1st-party data for optimal audience targeting and campaign measurement. Headquartered in Zürich, Decentriq is trusted by renowned institutions in the DACH market and beyond, such as RTL Ad Alliance, Publicis Media, and PostFinance.
