Data Engineer (Spark/Kubernetes) (Financial Services)

Hays plc
Charing Cross, United Kingdom
3 days ago

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Shift work
Languages
English
Experience level
Intermediate
Compensation
£194K

Job location

Charing Cross, United Kingdom

Tech stack

Java
Agile Methodologies
Big Data
ETL
Distributed Computing Environment
Python
Openshift
Performance Tuning
Scrum
Spark
Spring Boot
PySpark
Kubernetes

Job description

We are seeking a Data Engineer to support the replacement of a legacy ETL tool with a modern Apache Spark-based data platform. This is a hands-on engineering role focused on building and supporting Spark jobs, with an emphasis on performance, reliability, and scalability. The role sits within a small Agile delivery team of four engineers (two onshore and two in Shenzhen), working closely with a Senior Data Engineer. You will be responsible for development work, sprint delivery, demos, documentation, and stakeholder engagement. This position suits a mid-level engineer with strong Spark development experience rather than design, infrastructure, or management responsibilities.

Requirements

  • Strong hands-on experience with Apache Spark, including writing and tuning Spark jobs and PySpark development
  • Strong experience working with containerised environments using Kubernetes
  • Programming experience in Python or Scala
  • Exposure to Big Data technologies and distributed data processing
  • Working knowledge of Java or a Java Spring Boot development background
  • Experience with an Ops way of working, not purely development - you know how to deploy solutions
  • Experience with OpenShift is highly desirable
  • Experience working in an Agile environment (Scrum, sprints, demos)
  • Financial services or professional services experience is required

Apply for this position