Data Engineer - AWS

UBDS Group
Charing Cross, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Compensation
£56K

Job location

Charing Cross, United Kingdom

Tech stack

API
Amazon Web Services (AWS)
Azure
Big Data
Cloud Computing
Continuous Integration
Information Engineering
Data Infrastructure
Data Migration
Data Systems
DevOps
Distributed Computing Environment
Distributed Systems
Github
Python
Data Processing
File Transfer Protocol (FTP)
Multi-Cloud
Event Driven Architecture
Microsoft Fabric
PySpark
Kafka
Terraform
Stream Processing
Software Version Control
Data Pipelines
Legacy Systems
Databricks

Job description

UBDS Group is seeking skilled Data Engineers to design, build, and optimise modern data platforms. This role is suited to engineers with strong experience in data pipelines, cloud technologies, and distributed data processing, who can deliver scalable, high-quality data solutions.

You will work as part of a multidisciplinary team to develop reliable data assets that support a range of operational and analytical use cases.

Responsibilities

  • Design, develop, and maintain data pipelines and transformation processes using modern data engineering tools

  • Build and manage batch and real-time data processing solutions
  • Develop curated, reusable data assets to support downstream analytics and reporting
  • Ensure data quality, integrity, and consistency across multiple data sources
  • Work with data ingested via APIs, file-based ingestion (e.g. SFTP), and event streams (e.g. Kafka)
  • Optimise data pipelines for performance, scalability, and cost efficiency
  • Contribute to the development of data platform components and reusable engineering patterns
  • Work within established architecture and engineering standards
  • Collaborate with DevOps engineers, testers, and architects to deliver end-to-end data solutions
  • Support onboarding of new data sources and integration into existing platforms
  • Contribute to knowledge sharing and continuous improvement within engineering teams

Technology Stack

Core Technologies

  • AWS (primary platform)

  • S3, EMR, Glue

  • Kafka for event-driven data processing
  • Python and PySpark for data engineering and transformation
  • Terraform (Infrastructure as Code)
  • GitHub (version control and CI/CD)

Additional / Desirable Technologies

  • Azure data platforms
  • Databricks and/or Microsoft Fabric
  • Multi-cloud architectures and tooling

Skills & Experience

Essential

  • Experience in data engineering, building and maintaining data pipelines
  • Strong hands-on experience with Python and PySpark
  • Experience working with AWS data services (e.g. S3, EMR, Glue)
  • Experience with event-driven architectures, particularly Kafka
  • Understanding of batch and real-time data processing
  • Experience working with large-scale datasets and distributed systems
  • Familiarity with data quality, validation, and schema management
  • Experience using Terraform and GitHub within a DevOps environment
  • Ability to work within defined engineering standards and best practices

Desirable

  • Experience with Azure, Databricks, or Microsoft Fabric
  • Experience in regulated environments
  • Exposure to data migration from legacy systems
  • Understanding of analytics, reporting, or data product use cases
  • Experience working in collaborative, cross-functional teams

Apply for this position