Staff Data Platform Engineer
Job description
Checkout.com is looking for an ambitious Staff Data Engineer to join our Data and AI Platform Team. Our team's mission is to build a platform where you can create reliable, scalable, AI-powered streaming and batch data applications, and share data across Checkout.com to improve business performance.
The Data and AI Platform team ensures internal stakeholders can easily collect, store, process and utilise data to build AI use cases and data products that solve business problems. Our focus is on maximising the time engineers spend solving business problems and minimising the time they spend on the implementation, deployment, and monitoring of their solutions.
We're building for scale. As such, much of what we design and implement today is the technology and infrastructure that will serve hundreds of teams and petabyte-level volumes of data.
- Work with stream processing technologies (Kafka and Flink) to build a continuously available, large-scale event streaming platform
- Leverage subject matter and technical expertise to provide leadership, mentoring, and strategic influence across the organisation whilst building strong relationships with engineers and engineering managers
- Build tooling (modules/SDKs/DSLs) and associated documentation to foster the adoption of the streaming platform by enabling upstream teams and systems to easily publish data and deploy streaming applications
- Implement all the necessary infrastructure to enable end users to build, host, monitor and deploy their own streaming applications
- Provide consultancy across the technology organisation to drive the adoption of the platform and unlock event-driven use-cases
- Gather requirements, translate architecture and design initiatives into concrete action plans, and drive them through to execution
- Provide hands-on support for all event-based systems including incident triage and root cause analysis
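To illustrate the kind of logic the streaming applications above encapsulate, here is a toy sketch of an event-time tumbling-window aggregation, the sort of computation a Flink job on this platform might perform. It is plain Python over hypothetical payment events, not production code:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_ms):
    """Count events per key within fixed (tumbling) event-time windows.

    Illustrative only: in production this logic would run inside a
    Flink job consuming events from a Kafka topic.
    """
    windows = defaultdict(int)
    for timestamp_ms, key in events:
        # Assign each event to the window containing its timestamp
        window_start = (timestamp_ms // window_size_ms) * window_size_ms
        windows[(window_start, key)] += 1
    return dict(windows)

# Hypothetical payment events keyed by merchant, 1-second windows
events = [
    (100, "merchant-a"),
    (450, "merchant-a"),
    (700, "merchant-b"),
    (1200, "merchant-a"),
]
print(tumbling_window_counts(events, 1000))
# → {(0, 'merchant-a'): 2, (0, 'merchant-b'): 1, (1000, 'merchant-a'): 1}
```

In a real deployment, windowing, state, and fault tolerance are handled by the stream processor itself; the platform's job is to make building, deploying, and monitoring such applications trivial for upstream teams.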
Requirements
While experience with our specific tech stack is a plus, we welcome candidates with a strong background in data systems who are eager to learn. The core remit of this role is to own and scale our event streaming capability, not to serve as a general DevOps or infrastructure engineer.
- Strong presentation and communication skills with a proven track record of influencing engineering organisations
- Strong engineering background with a track record of implementing and owning event streaming platforms
- Hands-on experience working with stream technologies, primarily Kafka, but also Kinesis
- Experience designing and implementing stream processing applications with Flink
- Experience working with cloud-based technologies such as AWS (MSK, S3, Lambda, ECS, SNS)
- Experience with Kubernetes (either self-hosted or on the cloud)
- Experience with SQL databases
- Experience working with Docker, container deployment and management
- Experience defining infrastructure as code (Terraform or similar), as well as designing and implementing CI/CD pipelines
- Excellent programming skills with at least one of Java, Python, Scala or C#