Data Operations Engineer
Job description
As a Data Operations Engineer, you'll play a critical role in the operational backbone of STRATOS. You'll help manage and maintain data pipelines, ensure client environments are configured correctly, and support the accuracy and reliability of the metrics our customers depend on.
You'll balance tasks from multiple teams, requiring strong organisation, clear communication, and attention to detail. This is a hands-on role with real responsibility, offering rapid learning and exposure to production-grade data systems early in your career.
The work will involve using no-code systems in Palantir Foundry, an industry-leading enterprise data platform, and writing some code for custom functionality. You will also be tasked with finding efficiencies in our internal processes, using AI tools and other systems to improve how we work and enable us to scale our impact faster.
- Support the smooth operation of multiple client implementations behind the scenes
- Help build, maintain, and monitor data pipelines and integrations across the STRATOS platform
- Use Python, SQL, and no-code analysis tools to transform, validate, and analyse data
- Ensure data accuracy and consistency across complex datasets and workflows
- Investigate and resolve data issues in collaboration with senior engineers
- Coordinate with product, engineering, and implementation teams to prioritise and deliver work
- Contribute to documentation and operational best practices as the platform scales
Requirements
- A degree in a quantitative or technical subject (e.g. Computer Science, Engineering, Mathematics, Physics, Economics, or similar), or equivalent practical experience
- Some hands-on experience with Python and SQL; internships and university projects count
- Strong organisational skills and the ability to juggle multiple tasks and priorities
- Excellent written and verbal communication skills
- High attention to detail and a genuine care for data quality and correctness
- A proactive, ownership-driven mindset and willingness to learn quickly in a fast-moving environment
Nice to Have
- Exposure to data engineering tools or concepts (e.g. ETL pipelines, data modelling, analytics engineering)
- Familiarity with tools such as PySpark, Airflow, dbt, or Databricks
- Experience working with business data such as ERP, CRM, or operational systems
- Interest in B2B SaaS, data platforms, or AI-enabled products
What We Offer
- The chance to build a strong technical foundation at a category-defining startup
- Close mentorship from experienced engineers and operators
- Exposure to real-world, production-scale data systems early in your career
- A fast-paced environment where you'll take on meaningful responsibility from day one
Benefits & conditions
- Competitive compensation and the opportunity to grow with the company
- A vibrant and inspiring office in Soho, London
- A mission-driven culture: net-zero on CO₂ and 1% of revenue donated to social impact causes