Senior Platform Engineer

Kernel
17 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£120K - £200K

Job location

Hybrid (at least 3 days a week in the office)

Tech stack

Amazon Web Services (AWS)
Data analysis
Google BigQuery
Databases
ETL
Data Systems
Shard (Database Architecture)
DevOps
Elasticsearch
PostgreSQL
Online Analytical Processing
Node.js
Online Transaction Processing
Prometheus
Next.js
TypeScript
Datadog
Pulumi
Tailwind
Autoscaling
Large Language Models
Grafana
Indexer
Backend
Kubernetes
Kafka
Front End Software Development
Cloudwatch
Terraform
Data Pipelines

Job description

Join Kernel as an in-office, backend-savvy Senior Platform Engineer to own and scale our Postgres, AWS, and Kubernetes stack. You'll solve bottlenecks around high-volume writes, wide-attribute data models, replication, and real-time vs analytical data paths while bringing Infrastructure-as-Code (Terraform or Pulumi) practices to productionize and automate our infra.

This role pays £120-200k + equity, with visa sponsorship if needed, via a 4-step hiring flow: intro chat, 2-hr take-home, onsite deep-dive, and founder values interview.

You're a backend-oriented engineer with strong experience in database infrastructure, IaC, and large-scale data systems. You'll work closely with our product and engineering teams to ensure Kernel's infra can handle:

  • High-volume OLTP writes with immediate read-after-write consistency
  • Thousands of dynamic attributes per entity (wide-table and indexing challenges)
  • Separation of OLTP/OLAP paths with explicit freshness guarantees
  • Replication/sharding at scale
  • Efficient Kubernetes queueing and scheduling
  • Cost optimisation and proactive capacity planning
  • Infrastructure-as-Code to ensure reproducibility and automation
  • Secure and observable infra that meets SOC2 standards

You'll thrive if you enjoy getting deep into DB internals, replication flows, and scaling infra for ML/data-heavy workloads - while still being comfortable in a fast-moving product environment.

Kernel processes millions of accounts, petabytes of data, and millions of parallel agent executions every day.

Your role will be to own our infra scaling journey, from Postgres schema design to AWS autoscaling to replication flows, making sure our systems stay fast, reliable, and cost-efficient as we grow.

What you'll be doing

  • Implementing IaC (Terraform or Pulumi) across AWS/K8s for reproducible infra and faster iteration

  • Experimenting with schema options for wide-attribute entities and indexing strategies
  • Hardening replication and data movement between Postgres and analytical stores
  • Consolidating and right-sizing K8s queues, with scheduling/guardrails for heavy jobs
  • Driving cost efficiency through infra modelling, observability, and autoscaling
  • Producing key artifacts: problem registers, replication diagrams, freshness matrices, slow-query baselines

This role may not be for you if…

  • You want only pure product feature work - this role leans deep into infra
  • You need rigid structure and long-term roadmaps - many problems are still open research tracks

  • You're heavily indexed on DevOps tooling but lack database/infra depth

This role is definitely not for you if …

  • You prefer fully remote work (this role requires at least 3 days a week in the office)
  • You don't enjoy the intensity of early-stage startup infra firefighting
  • You want to manage, not build

Our stack

  • Core DB: Postgres (JSONB, partitioning, pglogical, replication strategies)
  • Analytics: Redshift, Clickhouse (evaluated for OLTP/OLAP split)
  • Infra: AWS, Kubernetes, Terraform/Pulumi for IaC, n8n for workflow automation
  • Backend: Node.js, TypeScript
  • Front-end: Next.js, TypeScript, Tailwind

Requirements

  • 6+ years of backend / infra engineering experience

  • Deep hands-on experience with Postgres (replication, partitioning, indexing, sharding) at scale
  • Strong background in AWS (Aurora/RDS, S3, EKS/Kubernetes, networking, autoscaling, cost control)
  • Hands-on experience with Infrastructure-as-Code (Terraform, Pulumi, or equivalent)
  • Experience working with large-scale data pipelines, ideally in ML or analytics-heavy products
  • Ability to operate autonomously and propose systemic fixes, not just patchwork

It would be a plus if you also have

  • Experience with analytical stores (Redshift, Clickhouse, BigQuery) alongside OLTP systems
  • Familiarity with search/retrieval infra (ElasticSearch, OpenSearch, vector DBs like Weaviate/Pinecone/FAISS)
  • Observability & monitoring experience (Datadog, Prometheus, Grafana, CloudWatch)
  • Experience in cost modelling, queue scheduling, or infra observability
  • Exposure to multi-cloud (AWS + GCP) environments
  • Hands-on work with event streaming (Kafka, Kinesis, MSK)
  • Some exposure to LLM infra / retrieval systems (RAG, vector DBs, hybrid serving)

Benefits & conditions

We'll do our very best to offer you the ride of a lifetime. It won't be easy, but it will be thrilling.

  • Working directly with the founding team
  • A fast-paced ride in the early innings of a new technology wave
  • Weekly 1:1s to help you grow
  • Salary: £120-200k + equity
  • 24 days of holiday per year
  • 2 weeks working from anywhere
  • Pension

The team

  • Fara Ashiru, Head of Engineering
  • Sam Houghton, Founding Engineer
  • Eleanor Leung, Sr Engineer
  • David Saltares, Sr Engineer
  • Stefan Sabev, Head of Product
  • Tom Ankers, Sr Engineer
  • Willis Chou, Sr Engineer

The take-home

A take-home task where you'll be asked to solve a real-world problem we've come across. You'll spend a maximum of 4-5 hours on this challenge.

About the company

Kernel is building the source of truth for enterprise intelligence. RevOps teams at companies like Remote, Navan, Zip, GoCardless, and Cognism use Kernel to clean their CRMs, research companies, and target the right accounts with accuracy. The challenge is massive: ~7B tokens processed per day, 1.8M+ agents running daily, and petabytes of messy data scraped, cleaned, and enriched. AI hallucinations are fine in chat, but not in RevOps - for us, the answer has to be correct. That means our engineering bar is extremely high, because some of the best revenue teams in the world depend on us.

Anders (Founder + CEO) & Macus conduct our Founders interview. It's a values-based discussion exploring your personal and professional values and how they align with ours. If you like us and we like you, it'll be time to make you a job offer after reference checks!

Apply for this position