Backend Engineer - Ingestion
Job description
We're looking for an engineer who:
- Thrives on challenges of building systems that process billions of events per day
- Gets excited about designing elegant and efficient systems that can handle terabytes of data without giving people insomnia
- Understands the importance of data integrity and reliability for customers
The ideal candidate has experience with high-throughput data processing systems such as:
- Analytics platforms
- Metric collection systems
- Log aggregation engines
- Streaming and batch-processing pipelines
We use a mixture of Node.js and Rust for high-throughput processing. We store most of our data in Kafka, PostgreSQL, ClickHouse, S3, and Redis, but with the growing volume of data, we're constantly re-evaluating our technological choices. We're looking for someone who understands the principles of designing distributed systems and can use them to pick the best tools for the job.

At PostHog you won't get stuck maintaining an obscure microservice or working in the shadows of the product org. Instead, you will:
- Own the entire service end to end: No committees or overzealous PMs; the destiny of the ingestion pipeline will be in your hands.
- Build open-source software: You'll be able to show your Rust-fu to your friends and family (and security researchers too).
- Build in the hot path: Your code will decide whether our customers and engineers have a good time or not.
- Start from first principles: No cookie-cutter solutions here, you'll be safe from AI agents for a good while.
- See immediate results: Small, confident, frequent steps forward - that's how we like to move.
What you'll be doing
Our team is spread across North America and Europe and we're looking for another engineer in Europe or East Coast US.
We're growing very quickly at PostHog, so quickly that the numbers in our job descriptions often get out of date. Our ingestion pipeline currently processes tens of billions of events a month, and we're hoping to add one more zero to that soon. You'll be responsible for developing the infrastructure to capture all that data, process it reliably, and provide it to other parts of PostHog's platform, such as product analytics, feature flags, CDP, and more.
Requirements
- Experience working with highly scalable, event-driven distributed systems
- You have developed multi-tenant software-as-a-service products
- Experience with Node.js, Go, Rust, or similar languages
- You have worked with Kafka and PostgreSQL, Redis, or similar systems at scale
- You know how to ship changes quickly without breaking things
Nice to have
- Experience with customer data platforms or similar data analytics systems
- You've carried a pager and have dealt with incidents
- You're comfortable with provisioning and maintaining cloud infrastructure
- Experience with benchmarking and profiling tools
- Knowledge of observability systems and practices
We believe people from diverse backgrounds, with different identities and experiences, make our product and our company better. That's why we've dedicated a page in our handbook to diversity and inclusion. No matter your background, we'd love to hear from you! Alignment with our values is just as important as experience!