Software Engineer

Nebius Data Platform
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Netherlands

Tech stack

API
Amazon Web Services (AWS)
Big Data
C++
Data Structures
Software Debugging
Distributed Data Store
Distributed Systems
Hadoop
Python
Reliability Engineering
Software Engineering
SQL Databases
Parquet
Multithreading
Concurrency
Spark
Integration Tests
Kubernetes
Kafka
Microservices

Job description

We're looking for a Software Engineer with strong C++ expertise to join the team building and operating Nebius Data Platform, a distributed storage and processing platform that acts as the company's "source of truth" and the backbone of many internal (and some external) products.

  • Design and implement new functionality in the YTsaurus core (C++) with production reliability in mind.
  • Build and evolve platform-level capabilities: platform architecture and an operating model that supports multi-cluster growth, shared primitives, and a consistent experience that scales with new teams and use cases.
  • Improve end-to-end platform experience for internal (and external-facing) users: APIs, guardrails, debugging workflows, and automation.
  • Own production quality: incident response / on-call rotation, root cause analysis, and turning learnings into durable fixes.

Example projects

  • Roll out sharded YTsaurus masters (incl. Kubernetes operator support) and build automatic balancing of metadata across master cells (consensus groups) to remove control-plane bottlenecks and unlock 10–100x cluster growth.
  • Make CHYT interactive SQL faster and more predictable at high load via performance work like data-skipping / min-max-style indexes and improved execution introspection.
  • Turn Orchestracto into a platform product by defining the building blocks, developer experience, and governance for how teams create and share workflows.
  • Scale and harden Parquet-on-S3 for native YTsaurus workloads by tackling replication/movement, consistent lifecycle semantics, and master-server metadata optimizations for performance and reliability.
  • Design and ship complete, trustworthy audit trails for data changes (who/what/when) across heterogeneous storage and compute paths.
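
One of the example projects above mentions data-skipping / min-max-style indexes. The core idea: keep per-block min/max statistics for a column so a range predicate can skip blocks whose value range cannot match, without reading their data. The sketch below is purely illustrative (all type and function names are invented for this example, not actual YTsaurus/CHYT code):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-block statistics: the min and max value stored in one data block.
struct BlockStats {
    int64_t min;
    int64_t max;
};

// Build min/max stats for fixed-size blocks of a column.
std::vector<BlockStats> buildStats(const std::vector<int64_t>& column,
                                   std::size_t blockSize) {
    std::vector<BlockStats> stats;
    for (std::size_t i = 0; i < column.size(); i += blockSize) {
        auto first = column.begin() + i;
        auto last = column.begin() + std::min(i + blockSize, column.size());
        auto [lo, hi] = std::minmax_element(first, last);
        stats.push_back({*lo, *hi});
    }
    return stats;
}

// Return indices of blocks whose [min, max] range overlaps [lo, hi];
// every other block can be skipped entirely during a scan.
std::vector<std::size_t> candidateBlocks(const std::vector<BlockStats>& stats,
                                         int64_t lo, int64_t hi) {
    std::vector<std::size_t> result;
    for (std::size_t i = 0; i < stats.size(); ++i) {
        if (stats[i].max >= lo && stats[i].min <= hi) {
            result.push_back(i);
        }
    }
    return result;
}
```

Selective predicates on roughly sorted or clustered data prune most blocks this way, which is why the same technique appears in Parquet column statistics and ClickHouse min-max skip indexes.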

Tech stack

  • Core: modern C++ (C++20, async + multithreaded primitives)
  • Services & tooling: Go and Python (microservices, utilities, integration tests)

Requirements

We're looking for engineers who combine strong systems skills with product sense: understanding who uses the platform, why certain capabilities matter, and making pragmatic trade-offs to maximize impact. On our team, engineering work is expected to be connected to real users and outcomes: you'll regularly align with internal stakeholders, clarify requirements, and help drive prioritization.

  • 5+ years of software engineering experience.
  • Strong C++ skills (you'll write core code).
  • Working knowledge of Python and/or Go (you don't have to be an expert, but you should be comfortable navigating them).
  • Experience developing and/or operating high-load, distributed services.
  • Production mindset: ability to use SSH, read logs/metrics/traces, and debug distributed systems behavior.
  • Solid CS fundamentals: algorithms, data structures, concurrency basics.

Nice to have

  • Experience with Big Data systems (YTsaurus/Hadoop/Spark/ClickHouse/Kafka-like ecosystems).
  • Experience with multi-tenant platforms, schedulers, resource isolation, quotas, and reliability engineering.
  • Strong performance engineering skills (profiling, lock contention, latency/throughput tradeoffs).

We conduct coding interviews as part of the process.

Benefits & conditions

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Flexible working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.

We're growing and expanding our products every day. If you're up to the challenge and are excited about AI and ML as much as we are, join us!

About the company

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

Nebius Data Platform is a single multi-tenant ecosystem based on YTsaurus: instead of running separate HDFS/Kafka/HBase-style systems, we provide storage, compute, and analytics capabilities inside one platform. Built on top of the open-source YTsaurus ecosystem, we run and extend our own Nebius distribution and develop significant in-house functionality (core and platform-level). We can design, implement, and roll out features end-to-end on our clusters without waiting for upstream approvals, and contribute upstream when it makes sense. At scale today, this includes ~500 servers, ~20k CPU cores, and ~10 PB of compressed data in our largest production cluster, supporting workloads ranging from business-critical pipelines and financial transactions to large-scale ML/LLM training datasets and compute.

What's inside the platform

You'll work on a system that includes (and ties together):

  • Distributed storage (Cypress): transactional semantics, tiered storage, erasure coding, replication, and strong reliability expectations.
  • Compute & ETL: a cluster-wide job scheduler (tens of thousands of cores), MapReduce, YQL for SQL-like data processing, and SPYT (Spark over YTsaurus) for modern data engineering.
  • Interactive analytics (CHYT): ClickHouse® instances spun up directly on compute nodes for fast SQL over data in-place.
  • Dynamic Tables: low-latency NoSQL KV with distributed ACID transactions for OLTP-style workloads and feature stores.
  • Orchestracto: workflow orchestration deeply integrated with the platform (Airflow-like, but platform-native).

Apply for this position