Senior Data Engineer

Kpler · Charing Cross, London, United Kingdom
Salary: Competitive
Published: 18 hours ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Charing Cross, United Kingdom

Tech stack

Clean Code Principles
API
Agile Methodologies
Airflow
Amazon Web Services (AWS)
Test Automation
Code Review
Computer Programming
Databases
Data Integration
Serialization
Data Systems
Protocol Buffers
Gradle
Java Virtual Machine (JVM)
Python
Maven
NoSQL
Performance Tuning
Query Optimization
Software Engineering
SQL Databases
Data Streaming
Software Version Management
Parquet
Data Processing
Test Driven Development
Delivery Pipeline
Spark
System Reliability
Git
FastAPI
Kubernetes
Avro
Kafka
REST
Terraform
Data Pipelines
Docker
Microservices

Job description

The role is responsible for end-to-end ownership of development work, beginning with a clear understanding of assigned tickets and requirements. This includes designing and implementing functionality across APIs and data processing components, and ensuring deployments to development environments are tested and reviewed by peers and product stakeholders. The role emphasizes strong testing practices through unit, integration, and functional tests aligned with defined scenarios, along with thorough validation to ensure compliance and quality. After delivery, the role monitors performance, alerts, and SLOs to ensure the functionality operates reliably and optimally in production.

Responsibilities

  • Deliver well-documented, maintainable code following Test-Driven Development (TDD) principles, ensuring comprehensive unit, integration, and end-to-end testing.
  • Provide technical leadership within the team, helping to elevate engineering capabilities.
  • Design, operate, and document versioned RESTful APIs using FastAPI and JVM-based frameworks, ensuring secure and scalable service delivery (see the sketch after this list).
  • Implement and enforce data schema evolution and versioning strategies to support reliable data integration across systems.
  • Develop and maintain batch and streaming data pipelines using technologies such as Kafka and Spark, managing backpressure, orchestration, retries, and data quality.
  • Take ownership of system performance by improving latency and throughput, applying effective partitioning strategies for databases and Parquet/Iceberg files, defining indexing approaches, and creating efficient query execution plans.
  • Ensure system reliability by instrumenting services with metrics, logs, and traces; contributing to CI/CD pipelines, automated testing, incident response, and root cause analysis.
  • Collaborate closely with Product and cross-functional teams to deliver business outcomes, define test scenarios, and contribute to roadmap planning.
  • Uphold engineering quality through clean code and sound architecture, active participation in code reviews, and adherence to Agile development practices.
  • Provide technical mentorship and guidance to team members, supporting knowledge sharing and engineering excellence.
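
To make the versioned-API responsibility concrete, here is a minimal sketch of a FastAPI service with a /v1 path prefix. It is illustrative only: the Vessel model, the IMO-keyed in-memory store, and the route shapes are hypothetical stand-ins, not Kpler's actual API.

    from typing import Dict, Optional

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="Vessel API (illustrative sketch)")

    class Vessel(BaseModel):
        imo: int                                  # IMO number identifying the vessel
        name: str
        deadweight_tonnes: Optional[float] = None

    # Hypothetical in-memory store standing in for a real database.
    _VESSELS: Dict[int, Vessel] = {}

    @app.put("/v1/vessels/{imo}", response_model=Vessel)
    def upsert_vessel(imo: int, vessel: Vessel) -> Vessel:
        _VESSELS[imo] = vessel
        return vessel

    @app.get("/v1/vessels/{imo}", response_model=Vessel)
    def get_vessel(imo: int) -> Vessel:
        vessel = _VESSELS.get(imo)
        if vessel is None:
            raise HTTPException(status_code=404, detail="vessel not found")
        return vessel

Saved as main.py, this can be served locally with uvicorn main:app --reload; keeping the version in the path (/v1, /v2, ...) is one common way to evolve an API without breaking existing clients.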

Requirements

  • 6-8 years of experience in data-focused software engineering roles.
  • Strong programming skills in Scala (or another JVM language); experience with Python preferred.
  • Proven experience designing and operating RESTful APIs, including secure and versioned interfaces.
  • Solid understanding of data modeling, schema evolution, versioning, and serialization technologies such as Avro or Protobuf (see the sketch after this list).
  • Hands-on experience with SQL and NoSQL databases, including query optimization and performance tuning.
  • Experience building and maintaining batch or streaming data systems, with knowledge of streaming patterns and reliability concerns.
  • Familiarity with caching strategies, CI/CD pipelines, and modern monitoring and alerting practices.
  • Proficiency with Git-based workflows, code reviews, and Agile development methodologies.
  • Strong sense of ownership, with pragmatic problem-solving skills, constructive critique and the ability to deliver end-to-end solutions.
  • Excellent communication skills and fluency in English, with the ability to collaborate effectively across product and engineering teams.
  • Experience with Apache Airflow for workflow orchestration.
  • Exposure to cloud platforms (preferably AWS) and infrastructure as code using Terraform.
  • Experience with Docker and Kubernetes in production environments.
  • Hands-on knowledge of Kafka and event-driven or microservices architectures.
  • Familiarity with JVM build and tooling ecosystems such as Gradle or Maven.
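
To illustrate the schema-evolution requirement, here is a minimal sketch using fastavro, one of several Avro libraries for Python. The vessel record, its fields, and the sample values are hypothetical; the point is that adding a field with a default keeps data written under the old schema readable under the new one.

    import io

    import fastavro

    SCHEMA_V1 = {
        "type": "record",
        "name": "vessel",
        "fields": [
            {"name": "imo", "type": "long"},
            {"name": "name", "type": "string"},
        ],
    }

    # v2 adds an optional field with a default, so records written with v1
    # remain readable under v2 (backward-compatible evolution).
    SCHEMA_V2 = {
        "type": "record",
        "name": "vessel",
        "fields": [
            {"name": "imo", "type": "long"},
            {"name": "name", "type": "string"},
            {"name": "flag", "type": ["null", "string"], "default": None},
        ],
    }

    buf = io.BytesIO()
    fastavro.writer(buf, fastavro.parse_schema(SCHEMA_V1),
                    [{"imo": 9321483, "name": "Example Carrier"}])
    buf.seek(0)

    # Read v1 data through the v2 schema: the missing "flag" field is
    # filled in from its default.
    for record in fastavro.reader(buf, reader_schema=fastavro.parse_schema(SCHEMA_V2)):
        print(record)  # {'imo': 9321483, 'name': 'Example Carrier', 'flag': None}

This "new fields must carry defaults" rule is the kind of check a schema registry can enforce automatically for Kafka topics.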

We are a dynamic company dedicated to nurturing connections and innovating solutions to tackle market challenges head-on. If you thrive on customer satisfaction and turning ideas into reality, then you've found your ideal destination. Are you ready to embark on this exciting journey with us?

About the company

At Kpler, we are dedicated to helping our clients navigate complex markets with ease. By simplifying global trade information and providing valuable insights, we empower organisations to make informed decisions in commodities, energy, and maritime sectors. Since our founding in 2014, we have focused on delivering top-tier intelligence through user-friendly platforms. Our team of over 700 experts from 35+ countries works tirelessly to transform intricate data into actionable strategies, ensuring our clients stay ahead in a dynamic market landscape. Join us to leverage cutting-edge innovation for impactful results and experience unparalleled support on your journey to success.

In this role, you will build and maintain Kpler's core datasets (vessel characteristics, companies, geospatial data). You will be responsible for creating and maintaining REST APIs, streaming pipelines (Kafka Streams), and Spark batch pipelines.

We build together: we foster relationships and develop creative solutions to address market challenges. We are here to help: we are accessible and supportive to colleagues and clients with a friendly approach.

Our People Pledge

Don't meet every single requirement? Research shows that women and people of color are less likely than others to apply if they feel like they don't match 100% of the job requirements. Don't let the confidence gap stand in your way; we'd love to hear from you! We understand that experience comes in many different forms and are dedicated to adding new perspectives to the team.

Kpler is committed to providing a fair, inclusive, and diverse work environment. We believe that different perspectives lead to better ideas, and better ideas allow us to better understand the needs and interests of our diverse, global community. We welcome people of different backgrounds, experiences, abilities, and perspectives and are an equal opportunity employer.

By applying, I confirm that I have read and accept the Staff Privacy Notice.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.

Apply for this position