Data Engineer

Nconsulting
Glasgow, United Kingdom

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£117K

Job location

Glasgow, United Kingdom

Tech stack

Amazon Web Services (AWS)
Computer Programming
Information Engineering
Data Integration
ETL
Data Warehousing
Document-Oriented Databases
Identity and Access Management
Python
Performance Tuning
SQL Databases
Data Streaming
Data Processing
Data Ingestion
Snowflake
Spark
PySpark
Semi-structured Data
Data Management
Data Pipelines

Job description

We are seeking an experienced Data Engineer with strong expertise in the AWS cloud ecosystem, Snowflake, Python, and Apache Spark, along with proven experience in the banking domain. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines and modern data platforms that support analytics, reporting, and regulatory requirements.

Key Responsibilities

Design, build, and maintain scalable data pipelines using AWS services and modern data engineering practices.
Develop and optimize ETL/ELT workflows using Python and Apache Spark (a brief PySpark sketch follows this list).
Implement and manage Snowflake data warehouse solutions, including data modeling, performance tuning, and optimization.
Work closely with business stakeholders, data analysts, and architects to understand banking data requirements.
Integrate data from multiple banking systems such as payments, transactions, customer, and risk platforms.
Ensure data quality, governance, security, and compliance aligned with banking regulations.
Develop data ingestion frameworks for structured and semi-structured data.
Optimize data processing performance and cost efficiency within AWS environments.
Support real-time and batch data processing solutions.
Document data architecture, data flows, and technical processes.
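
To make the Spark and Snowflake responsibilities concrete, here is a minimal PySpark sketch of the kind of pipeline this role covers: reading semi-structured JSON landed in S3, applying basic cleansing, and appending the result to a Snowflake table via the Spark-Snowflake connector. All bucket, table, account, and credential names are illustrative placeholders, not details from this posting.

    # Hedged sketch only: placeholder names throughout, not this employer's systems.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("payments-ingestion").getOrCreate()

    # Ingest raw, semi-structured payment events landed in S3.
    raw = spark.read.json("s3a://example-bank-landing/payments/2024/*.json")

    # Basic cleansing: drop malformed rows and normalize types.
    payments = (
        raw.where(F.col("transaction_id").isNotNull())
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Connector options; in a real deployment these would come from a
    # secrets manager, never hard-coded.
    sf_options = {
        "sfURL": "example_account.snowflakecomputing.com",
        "sfUser": "etl_user",
        "sfPassword": "********",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "PAYMENTS",
        "sfWarehouse": "ETL_WH",
    }

    # Append the cleansed batch into the Snowflake warehouse table.
    (payments.write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "FACT_PAYMENTS")
        .mode("append")
        .save())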

Requirements

6+ years of experience in Data Engineering.
Strong hands-on experience with AWS services (S3, Glue, Lambda, Redshift, EMR, Athena, Step Functions, IAM); see the event-driven example after this list.
Extensive experience with Snowflake, including schema design and performance tuning.
Strong programming skills in Python.
Hands-on experience with Apache Spark / PySpark.
Experience building ETL/ELT pipelines and data integration frameworks.
Strong SQL and data modeling skills.
Experience working with large-scale datasets.
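
As a hedged illustration of the AWS services named above, the sketch below shows a common event-driven pattern: a Lambda handler that reacts to a new object in S3 and starts a Step Functions state machine run via boto3. The ARN, bucket, and handler names are hypothetical, not taken from this posting.

    # Illustrative sketch under assumed names; ARNs and buckets are placeholders.
    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    STATE_MACHINE_ARN = (
        "arn:aws:states:eu-west-2:123456789012:stateMachine:nightly-etl"
    )

    def handler(event, context):
        """Triggered by an S3 ObjectCreated event; starts the ETL state machine."""
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            sfn.start_execution(
                stateMachineArn=STATE_MACHINE_ARN,
                input=json.dumps({"bucket": bucket, "key": key}),
            )
        return {"statusCode": 200}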
