Data Engineer

ETeam Inc
Manor Park, United Kingdom
3 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£81K

Job location

Manor Park, United Kingdom

Tech stack

Agile Methodologies
Airflow
Amazon Web Services (AWS)
JIRA
Unix
Continuous Integration
Information Engineering
Data Governance
Data Masking
Identity and Access Management
Python
Metadata Management
Nexus (Sonatype)
Performance Tuning
Scrum
SQL Databases
Tokenization
Data Logging
Data Ingestion
Snowflake
Gitlab
PySpark
Data Management
Data Pipelines
Jenkins

Job description

We are seeking a highly skilled Data Engineer with strong hands-on experience across cloud-based data engineering, data pipelines, orchestration frameworks, and modern data platforms. The ideal candidate will have deep expertise in AWS, PySpark, SQL, and workflow orchestration tools, along with a strong understanding of Agile delivery.

Responsibilities:

- Design, develop, and maintain scalable cloud-native data pipelines for ingestion, transformation, and processing.
- Build and optimize Enterprise Data Platform (EDP) components using AWS services including S3, Glue, KMS, STS, Step Functions, Lambda, Athena, and IAM.
- Develop transformation logic using dbt and Astronomer, and integrate pipelines with Snowflake.
- Implement and support data consumption processes using PySpark, Python, Airflow, SQL, GitLab, and Unix.
- Apply Protegrity for tokenization, encryption, and data protection controls.
- Implement CI/CD pipelines using Nexus and Jenkins.
- Collaborate within Agile teams using Jira, contributing to sprint planning, refinement, and delivery.
- Ensure robust data quality, monitoring, logging, and automation across the data life cycle.
- Partner with cross-functional stakeholders to optimize data workflows and improve data reliability and performance.

Requirements

Must have:

- 5+ years of hands-on experience as a Data Engineer.
- Strong proficiency in AWS EDP components: S3, Glue, KMS, STS, Step Functions, Lambda, Athena, IAM.
- Hands-on experience with dbt, Astronomer, and Snowflake.
- Strong knowledge of PySpark, Python, Airflow, SQL, GitLab, and Unix.
- Experience implementing Protegrity for data masking/tokenization.
- CI/CD exposure with Nexus and Jenkins.
- Strong understanding of Agile methodologies and experience working with Jira.

Good to have:

- Experience in building data ingestion frameworks from scratch.
- Knowledge of data modelling and performance optimization in Snowflake.
- Exposure to data governance, metadata management, and MDM tools.

Soft skills:

- Strong problem-solving and analytical abilities.
- Excellent communication and stakeholder interaction skills.
- Ability to work independently and in cross-functional teams.
- Ownership mindset and strong accountability for deliverables.

Apply for this position